It would be really annoying if this use case was made into an unreasonable hassle or killed entirely. (Alternatively, browser developers could've offered a real alternative, but it's a bit late for that now.)
Things like Jupyter Notebooks will presumably be unaffected by this, as they're not doing any cross-origin requests.
Likewise, when a command-line tool wants you to log in with OAuth2 and sends you back to a localhost URL, that's a simple redirect rather than a cross-origin request, so it should presumably still be allowed?
It seems like you're thinking of a specific application, or at least use-case. Can you elaborate?
Once you're launching an application, it seems like the application can negotiate with the external site directly if it wants.
Sure, you can make a fully cloud-reliant PW manager, which has to have your key stored in the browser and fetch your vault from the server, but a lot of us like that information never having to leave our computers.
This allows you to use a single centrally hosted website as user interface, without the control traffic leaving your network. e.g. Plex uses this.
This is just about blocking cross-origin requests from other websites. I probably don't want every ad network iframe being able to talk to my router's admin UI.
A common example is this:
1. I visit ui.manufacturer.tld
2. I click "add device" and enter 192.168.0.230, repeating this for my other local devices.
3. The website ui.manufacturer.tld now shows me a dashboard with aggregate metrics from all my switches and routers, which it collects by fetch()-ing data from all of them.
The manufacturer's site is just a static page. It stores the list of devices and the credentials to connect to them in localStorage.
None of the data ever leaves my network, but I can just bookmark ui.manufacturer.tld and control all of my devices at once.
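Concretely, the pattern is something like the sketch below; the /api/metrics path and the response shape are made up for illustration, and each device would have to serve permissive CORS headers for the cross-origin fetch to be readable.

// Hypothetical sketch of the static dashboard described above: device IPs live
// in localStorage, and the page fetches metrics from each device directly, so
// nothing transits the manufacturer's servers.
interface DeviceMetrics {
  device: string;
  uptimeSeconds: number;
  throughputMbps: number;
}

async function collectMetrics(): Promise<DeviceMetrics[]> {
  const devices: string[] = JSON.parse(localStorage.getItem("devices") ?? "[]");
  return Promise.all(
    devices.map(async (ip) => {
      // Only works if the device answers with Access-Control-Allow-Origin for
      // ui.manufacturer.tld (or *); otherwise the browser blocks the read.
      const res = await fetch(`http://${ip}/api/metrics`);
      return (await res.json()) as DeviceMetrics;
    }),
  );
}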
This is a relatively neat approach providing the same comfort as cloud control, without the privacy nightmare.
While I certainly prefer stuff I can just self-host, compared to the modern cloud-only reality with WebUSB and stuff, this is a relatively clean solution.
It has a multitude of benefits compared to opening it from the vendor's site each time:
1) It works offline.
2) It works if vendor site is down.
3) It works if the vendor restricts access to it due to an acquisition, a move to subscriptions, or discontinuation of a feature "because fuck you".
4) It works if the vendor goes out of business or pivots to something else.
5) It still works with YOUR devices if the vendor decides to drop support for old ones.
6) It still works with YOUR firmware versions if the vendor decides to push new ones with user-hostile features (I'm looking at you, BambuLab).
7) It cannot be compromised the way the copy on the vendor's site can be. If your system is compromised, you have bigger problems than a forged UI for devices. Even the best vendors have data breaches these days.
8) It cannot upload your data if the vendor goes rogue.
Downsides? If you really need to update it, you need to re-download it manually. Not a big hassle, IMHO.
Depending on the browser, file:/// is severely limited in what CORS requests are allowed.
And then there's products like Plex, where it's not a static site, but you still want a central dashboard that connects to your local Plex server directly via CORS.
People commonly use this to browse the collections of their own servers, and the servers of their friends, in a unified interface.
Media from friends is accessed externally, media from your own server is accessed locally for better performance.
And it seems strange to me too. A local (on-disk) site is like a local Electron app without bundling Chrome inside. Why should it be restricted when an Electron app can do everything? It looks illogical.
if that software runs with a pull approach, instead of a push one, the server becomes unnecessary
bonus: then you won't have websites grossly probing local networks that aren't theirs (ew)
Note: this is why the viewers for these tools will spin up a local web server.
With local LLMs and AI it is now common to have different servers for different tasks (LLM, TTS, ASR, etc.) running together where they need to communicate to be able to create services like local assistants. I don't want to have to jump through hoops of running these through SSL (including getting a verified self-signed cert.), etc. just to be able to run a local web service.
For instance, my interaction with local LLMs involves 0 web browsers, and there's no reason facebook.com needs to make calls to my locally-running LLM.
Running HTML/XML files in the browser should be easier, but at the moment it already has the issues you speak of. It might make sense, IMO, for browsers to allow requests to localhost from websites also running on localhost.
It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self sufficiency and using them to promote their PaaS goals.
They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDN, ads, fonts, etc. that every website has your browser make?
I tend to believe this particular functionality is no longer of any use to Google, which is why they want to deprecate it to raise the barrier of entry for others.
I’d be interested in hearing what the folks at Ladybird think of this proposal.
I guess once this is added maybe the proposed device opt in mechanism could be used for applications to cooperatively support access without a permission prompt?
MacOS currently does this (per app, not per site) & most users just click yes without a second thought. Doing it per site might create a little more apprehension, but I imagine not much.
Besides that, approximately zero laypersons will have even the slightest clue what this permission means, the risks involved, or why they might want to prevent it. All they know is that the website they want is not working, and the website tells them to enable this or that permission. They will all blindly enable it every single time.
They're just two different approaches with the same flaw: People with no clue how tech works cannot completely protect themselves from any possible attacker, while also having sophisticated networked features. Nobody has provided a decent alternative other than some kind of fully bubble-wrapped limited account using Group Policies, to ban all those perms from even being asked for.
Remember when they used to mock this as part of their marketing?
Microsoft deserved to be mocked for that implementation.
Creation of a shortcut on Windows is not necessarily innocuous. It was a common first vector to drop malware as users were accustomed to installing software that did the same thing. A Windows shortcut can hide an arbitrary pathname, arbitrary command-line arguments, a custom icon, and more; these can be modified at any time.
So whether it was a mistake for UAC to be overzealous or obstructionist, or Microsoft was already being mocked for poor security, perhaps they weren’t wrong to raise awareness about such maneuvers.
If you want to teach users to ignore security prompts, then completely pointless nagging is how you do it.
For example, it's pretty straightforward what camera, push notification, or location access means. Contact sharing is already a stretch ("to connect you with your friends, please grant...").
"Local network access"? Probably not.
A random website someone linked me to wanting to access my local network is a very different case. I'm absolutely not giving network or location or camera or any other sort of access to websites except in very extreme circumstances.
Maybe this won't fool you, but it would trick 90% of internet users. (And even if it was 20% instead of 90%, that's still way too much.)
But I was just pointing out that, while I'll make good use of it, it still probably won't offer sufficient protection (from themselves) for most.
My parents who are non-technical click no by default to everything, sometimes they ask for my assistance when something doesn't work and often it's because they denied some permission that is essential for an app to work e.g. maybe they denied access to the microphone to an audio call app.
Unless we have statistics, I don't think we can make assumptions.
The moment a user gets this permissions request, as far as I can tell they will hit approve 100% of the time. We have one office where the staff have complained that it's impossible to look at astrology websites without committing to desktop popups selling McAfee. Which implies those staff, having been trained to hit "no", believe it's impossible to do.
(yes, we can disable with a GPO, which I heavily promote, but that org has political problems).
Yes, as a Chromecast user, please do give me a break from the prompts, macOS – or maybe just show them for Airplay with equal frequency and see how your users like that.
The issue is that CORS gates access only on the consent of the target server. It must return headers that opt into receiving requests from the website.
This proposal aims to tighten that, so that even if the website and the network device both actively want to communicate, the user's permission is also explicitly requested. Historically we assumed server & website agreement was sufficient, but Facebook's recent tricks where websites secretly talked to apps on your phone have broken that assumption - websites might be in cahoots with local network servers to work against your interests.
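For context, the "consent" a target server gives today is nothing more than a few response headers. A minimal sketch using Node's built-in http module (the origin and port are placeholders):

import { createServer } from "node:http";

createServer((req, res) => {
  // The device opts in to cross-origin access simply by echoing these headers.
  res.setHeader("Access-Control-Allow-Origin", "https://ui.example.com"); // or "*"
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  if (req.method === "OPTIONS") {
    // Answer the preflight and stop; the browser then sends the real request.
    res.writeHead(204).end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);

Under the proposal, for a public site talking to a local address, these headers alone would no longer be enough; the browser would also prompt the user.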
So the attack vector that I can imagine is that JS on the browser can issue a specially crafted request to a vulnerable printer or whatever that triggers arbitrary code execution on that other device. That code might be sufficient to cause the printer to carry out your evil task, including making an outbound connection to the attacker's server. Of course, the webpage would not be able to discover whether it was successful, but that may not be important.
Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.
I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.
The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type as CORS will send a preflight.
However, because of the way it was checking for this (checking whether the Content-Type header contained the text "application/json"), it was relatively easy to construct a new Content-Type header that bypasses CORS:
Content-Type: multipart/form-data; boundary=application/json
It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.
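A sketch of the request described above (target URL and body are placeholders): the MIME essence is multipart/form-data, which is CORS-safelisted, so the browser sends it without a preflight, yet the body is ordinary JSON that the sloppy "contains application/json" check accepts.

// No OPTIONS preflight fires because this counts as a "simple" request; the
// response may be unreadable without CORS headers, but the request itself
// still reaches the vulnerable endpoint.
await fetch("http://192.168.0.50/api/exec", {
  method: "POST",
  headers: {
    "Content-Type": "multipart/form-data; boundary=application/json",
  },
  body: JSON.stringify({ cmd: "do-something" }),
});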
This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.
https://www.kb.cert.org/vuls/id/476267 is an article from 2001 on it.
As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.
If somebody would fix the "no https for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-https websites also have no access to browser crypto APIs so roll-your-own auth (the horror) isn't an option either. Fustrating!
Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier roll out of version one of this propposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But. Whatever my excuse might be, I was wrong. :-/
The browser will restrict the headers and methods of requests that can be sent in no-cors mode. (silent censoring in the case of headers, more specifically)
Anything besides GET, HEAD, or POST will result in an error in the browser, and not be sent.
All headers will be dropped besides the CORS safelisted headers [0]
And Content-Type must be one of urlencoded, form-data, or text/plain. Attempting to use anything else will see the header replaced by text/plain.
[0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
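A quick illustration of those rules (the endpoint is a placeholder):

// Methods outside GET/HEAD/POST are rejected outright in no-cors mode.
try {
  await fetch("http://192.168.1.10/api", { method: "DELETE", mode: "no-cors" });
} catch (e) {
  console.log("DELETE in no-cors mode rejects with a TypeError:", e);
}

// This one is sent, but the non-safelisted header is silently stripped and the
// response comes back opaque (type "opaque", status 0, unreadable body).
const opaque = await fetch("http://192.168.1.10/api", {
  mode: "no-cors",
  headers: { "X-Api-Key": "secret" }, // silently censored
});
console.log(opaque.type); // "opaque"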
What framework allows you to set up a misconfigured parser out of the box?
I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser's json() to accept all content types via a no-op function for type checking.
Meaning, it's hard to get into this state!
Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing. But IMO that is the point of using a server framework! To make it less likely devs will footgun via sane defaults that prevent these scenarios unless someone really wants to make a different choice.
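For illustration, the difference in Express looks roughly like this (the routes are made up):

import express from "express";

const app = express();

// Safe default: express.json() only parses bodies whose Content-Type is
// application/json, so the boundary=application/json trick above is ignored.
app.use("/safe", express.json());

// Risky opt-in: a no-op type check tells the parser to treat ANY Content-Type
// as JSON, which is what makes the header-smuggling bypass viable.
app.use("/risky", express.json({ type: () => true }));

app.post("/safe", (req, res) => res.json({ body: req.body }));
app.post("/risky", (req, res) => res.json({ body: req.body }));

app.listen(3000);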
I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.
Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.
I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.
<img src="http://192.168.1.1/router?reboot=1">
triggers a local network GET request without any CORS involvement. Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.
In the modern web it's much less of an issue due to SameSite cookies being the default.
You can't do a DELETE from a form; you have to use AJAX. A cross-origin DELETE needs a preflight.
To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.
While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues without apparent massive exploits for decades. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.
It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.
Also, form submission famously doesn't require CORS.
Note: preflight is not required for any type of request that browser js was capable of making prior to CORS being introduced. (Except for local network)
So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).
... Anyhow, I think it doesn't matter, because you can listen for the error/failure of most async requests. CORS errors are equivalent to network errors - the browser tells the JS it got status code 0 with no further information - but the timing of that could lead to some sort of inference? Hard to say what that would be, though. Maybe if you knew the target webserver was slow but would respond to certain requests, a slower failed local request could mean it actually reached a target device.
That said, why not just fire off simple http requests with your intended payload? Abusing the csrf vulnerabilities of local network devices seems far easier than trying to make something out of a timing attack here.
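To make the simple-vs-preflighted distinction above concrete (URLs are placeholders):

// Simple request: GET/HEAD/POST with only safelisted headers goes straight out,
// no OPTIONS round trip (a string body defaults to text/plain, still safelisted).
await fetch("http://target.example/api", { method: "POST", body: "a=1" });

// Non-simple request: a custom header (or PUT/DELETE/PATCH, or a non-safelisted
// Content-Type) makes the browser send an OPTIONS preflight first and proceed
// only if the server opts in.
await fetch("http://target.example/api", {
  method: "POST",
  headers: { "X-Custom-Header": "value" }, // triggers the preflight
  body: "a=1",
});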
https://security.stackexchange.com/questions/232345/ebay-web...
It may send an OPTIONS request, or not.
It may block a request being sent (in response to OPTIONS) or block a response from being read.
It may restrict which headers can be set, or read.
It may downgrade the request you were sending silently, or consider your request valid but the response off limits.
It is a matrix of independent gates essentially.
Even the language we use is imprecise; CORS itself is not really doing any of this or blocking things. As others pointed out, it's the Same-Origin Policy that is the strict one, and CORS is really an exception engine to allow us to punch through that security layer.
Client: I consent
Server: I consent
User: I DON'T!
ISN'T THERE SOMEBODY YOU FORGOT TO ASK?
Typically there are only 256 IPs, so a scan of them all is almost instant.
False. CORS only gates non-simple requests (via OPTIONS); simple requests are sent regardless of CORS config - there is no gating whatsoever.
Significant components of the browser, such as WebSockets, have no such restrictions at all.
they also had some kind of RPC websocket system for game developers, but that appears to have been abandoned: https://discord.com/developers/docs/topics/rpc
https://github.com/adc/ctf-midnightsun2022quals-writeups/tre...
C'mon. We all know that 99% of the time, Access-Control-Allow-Origin is set to * and not to the specific IP of the web service.
Also, CORS is not in the control of the user while the proposal is. And that's a huge difference.
This isn't going to help for that. The locally installed app, and the website, can both, independently, open a connection to a 3rd party. There's probably enough fingerprinting available for the 3rd party to be able to match them.
It's called ZTA, Zero Trust Architecture. Devices shouldn't assume the LAN is secure.
This is a problem impacting mass users, not just technical ones.
What attack do you think doesn't have a solution? CSRF attacks? The solution is CSRF tokens, or checking the Origin header, same as how non-local-network sites protect against CSRF. DNS rebinding attacks? The solution is checking the Host header.
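A minimal sketch of both checks as Express-style middleware (hostnames and origins are placeholders):

import express from "express";

const ALLOWED_HOSTS = new Set(["device.local", "192.168.1.50"]);
const ALLOWED_ORIGINS = new Set(["http://device.local", "http://192.168.1.50"]);

const app = express();

app.use((req, res, next) => {
  // DNS-rebinding defense: a rebound request still carries the attacker's
  // hostname in the Host header, so refuse anything that isn't ours.
  const host = (req.headers.host ?? "").split(":")[0];
  if (!ALLOWED_HOSTS.has(host)) return res.status(403).send("bad Host");

  // CSRF defense: for state-changing methods, require a known Origin
  // (equivalently, verify a CSRF token tied to the session).
  const origin = req.headers.origin;
  if (req.method !== "GET" && req.method !== "HEAD" && origin !== undefined && !ALLOWED_ORIGINS.has(origin)) {
    return res.status(403).send("bad Origin");
  }
  next();
});

app.post("/reboot", (req, res) => res.send("ok"));
app.listen(80);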
It's just the default. So far, browsers haven't really given different IP ranges different security.
evil.com is allowed to make requests to bank.com. Similarly, evil.com is allowed to make requests to foo.com even if foo.com DNS resolves to 127.0.0.1.
I remember having "zone" settings in Internet Explorer 20 years ago, and ISTR it did IP ranges as well as domains. Don't think it did anything about cross-zone security though.
I deal with a third-party hosted webapp that enables extra functionality when a webserver hosted on localhost is present. The local webserver exposes an API allowing the application to interact more closely with the host OS (think locally-attached devices and servers on the local network). If the locally-installed webserver isn't present, the hosted app hides the extra functionality.
Limiting browser access to the localhost subnet (127.0.0.0/8) would be fine with me, as a sysadmin, so long as I have the option to enable it for applications where it's desired.
One of the solutions in the market was open source up to a point (Nowina NexU), but it seems it's gone from GitHub
For local network, you can imagine similar use cases — keep something inside the local network (eg. an API to an input device; imagine it being a scanner), but enable server-side function (eg. OCR) from their web page. With ZeroConf and DHCP domain name extensions, it can be a pretty seamless option for developers to consider.
https://www.youtube.com/watch?v=wLgcb4jZwGM&list=PL1y1iaEtjS...
It's not clear to me from Google's proposal if it also restricts access to localhost, or just your local network - it'd be great if it were both, as we clearly can't rely on third parties to lock down their local servers sufficiently!
edit: localhost won't be restricted:
"Note that local -> local is not a local network request, as well as loopback -> anything. (See "cross-origin requests" below for a discussion on potentially expanding this definition in the future.)"
It will be restricted. This proposal isn't completely blocking all localhost and local IPs. Rather, it's preventing public sites from communicating with localhost and local IPs. E.g:
* If evil.com makes a request to a local address it'll get blocked.
* If evil.com makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a localhost address it'll get blocked.
* If a local address makes a request to a local address, it'll be allowed.
* If a local address makes a request to evil.com it'll be allowed.
* If localhost makes a request to a localhost address it'll be allowed.
* If localhost makes a request to a local address, it'll be allowed.
* If localhost makes a request to evil.com it'll be allowed.
Browers[sic] can't feasibly stop web pages from talking to private (local) IP addresses (2019)
https://utcc.utoronto.ca/~cks/space/blog/web/BrowsersAndLoca...
I understand why some companies want this, but doing it on the DNS level is a massive hack.
If I were the decision maker I would break that use case. (Chrome probably wouldn't though.)
GeoDNS and similar are very broadly used by services you definitely use every day. Your DNS responses change all the time depending on what network you're connecting from.
Further: why would I want my private hosts to be resolvable outside my networks?
Of course DNS responses should change depending on what network you're on.
In the linked article using the wrong DNS results in inaccessibility. GeoDNS is merely a performance concern. Big difference.
> why would I want my private hosts
Inaccessibility is different. We are talking about accessible hosts requiring different IP addresses to be accessed in different networks.
Let's say you have an internal employee portal. Accessing it from somewhere internal goes to an address in private space, while accessing it from home gives you the globally routable address. The external route might have more firewalls / WAFs / IPSes etc in the way. There's no other way you could possibly achieve this than by serving a different IP for each of the two networks, and you can do that through DNS, by having an internal resolver and an external resolver.
> but you could just have two different fqdns
Good luck training your employees to use two different URLs depending on what network they originate from.
Especially for universities it's very common to have the same hostname resolve to different servers, and provide different results, depending on whether you're inside the university network or not.
Some sites may require login if you're accessing them from the internet, but are freely accessible from the intranet.
Others may provide read-write access from inside, but limited read-only access from the outside.
Similar situations with split-horizon DNS are also common in corporate intranets or for people hosting Plex servers.
Ultimately all these issues are caused by NAT and would disappear if we switched to IPv6, but that would also circumvent the OP proposal.
Similarly the use case of read-write access from inside, but limited read-only access from the outside is also achievable by checking the source IP.
Take the following example (all IPs are examples):
1. University uses 10.0.0.0/8 internally, with 10.1.0.0/16 and 10.2.0.0/16 being students, 10.3.0.0/16 being admin, 10.4.0.0/16 being the natsci institute, 10.5.0.0/16 being the tech institute, etc.
2. You use radius to assign users to IP ranges depending on their group membership
3. If you access the website from one of these IP ranges, group membership is implied, otherwise you'll have to log in.
4. The website is accessible at 10.200.1.123 internally, and 205.123.123.123 externally with a CDN.
Without NAT, this would just work, and many universities still don't use NAT.
But with NAT, the website won't see my internal IP, just the gateway's IP, so it can't verify group membership.
In some situations I can push routes to end devices so they know 205.123.123.123 is available locally, but that's not always an option.
In this example the site is available externally through Cloudflare, with many other sites on the same IP.
So I'll have to use split horizon DNS instead.
You can use 203.0.113.0/24 in your examples because it is specifically reserved for this purpose by IETF/IANA: https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4
In this case the comment you see is the third attempt, ultimately written on a phone (urgh), but I hope the idea came across nonetheless.
Listening on a specific port is one of the most basic things software can possibly do. What's next, blocking apps from reading files?
Plus, this is also about blocking your phone's browser from accessing your printer, your router, or that docker container you're running without a password.
Adding two Android permissions would fix this entire class of exploits: "run local network service", and "access local network services" (maybe with a whitelist).
https://learn.microsoft.com/en-us/previous-versions/troubles...
A proposal to treat web browsers as malware? Why would a web browser connect to a socket/internet?
I often see sites like Paypal trying to probe 127.0.0.1. For my "security", I'm sure...
[0] Filter Lists -> Privacy -> Block Outsider Intrusion into Lan
[1] <https://github.com/uBlockOrigin/uAssets/blob/master/filters/...>
One internal site I spend hours a day using has a 10.x.x.x IP address. The servers for that site are on the other side of the country and are many network hops away. It's a big company, our corporate network is very very large.
A better definition of "local IP" would be whether the IP is in the same subnet as the client, i.e. look up the client's own IP and subnet mask and determine if a packet to a given IP would need to be routed through the default gateway.
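A sketch of that definition for IPv4 (mask and addresses are examples): an address is "local" only if it shares the client's subnet, i.e. both sides are equal after masking.

// Convert a dotted-quad IPv4 address to a 32-bit unsigned integer.
function toUint32(ip: string): number {
  return ip.split(".").reduce((acc, octet) => ((acc << 8) | Number(octet)) >>> 0, 0);
}

function sameSubnet(clientIp: string, mask: string, candidateIp: string): boolean {
  const m = toUint32(mask);
  return ((toUint32(clientIp) & m) >>> 0) === ((toUint32(candidateIp) & m) >>> 0);
}

// A 10.x address many hops away is NOT "local" by this definition...
console.log(sameSubnet("10.20.30.40", "255.255.255.0", "10.99.1.2"));      // false
// ...while the gateway on the same segment is.
console.log(sameSubnet("192.168.20.59", "255.255.255.0", "192.168.20.1")); // true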
> Note that local -> local is not a local network request
So your use case won't be affected.
Like, at home, I have 10/8 and public IPv6 addresses.
1: https://www.rfc-editor.org/rfc/rfc1918
btw, I've seen that kind of network. I was young, and it took me a while to realize that they DHCP assign global IPs and double NAT it. That was weird.
The reality is that each network interface has at least one Internet address, and these should usually all be different.
An ordinary computer at home could be plugged into Ethernet and active on WiFi at the same time. The Ethernet interface may have an IPv4 address and a set of IPv6 addresses, and belong to their home LAN. The WiFi adapter and interface may have a different IPv4 address, and belongs to the same network, or some other network. The latter is called "multi-homing".
If you visit a site that reveals your "public" IP address(es), you may find that your public, routable IPv4 and/or IPv6 addresses differ from the ones actually assigned to your interfaces.
In order to be compliant with TCP/IP standards, your device always needs to respond on a "loopback" address in 127.0.0.0/8, and typically this is assigned to a "loopback" interface.
A network router does not identify with a singular IP address, but could answer to dozens, when many interface cards are installed. Linux will gladly add "alias" IPv4 addresses to most interface devices, and you'll see SLAAC or DHCPv6 working when there's a link-local and perhaps multiple routable IPv6 addresses on each interface.
The GP says that their work computer has a [public] routable IP address. But the same computer could have another interface, or even the same interface has additional addresses assigned to it, making it a member of that private 10.0.0.0/8 intranet. This detail may or may not be relevant to the services they're connecting to, in terms of authorization or presentation. It may be relevant to the network operators, but not to the end-user.
So as a rule of thumb: your device needs at least one IP address to connect to the Internet, but that address is associated with an interface rather than your device itself, and in a functional system, there are multiple addresses being used for different purposes, or held in reserve, and multiple interfaces that grant the device membership on at least one network.
The proposal here would consider that site local and thus allowed to talk to local. What are the implications? Your employer whose VPN you're on, or whose physical facility you're located in, can get some access to the LAN where you are.
In the case where you're a remote worker and the LAN is your private home, I bet that the employer already has the ability to scan your LAN anyway, since most employers who are allowing you onto their VPN do so only from computers they own, manage, and control completely.
It would block a site from scanning your other 10.x peers on the same network segment, thinking they’re “on your LAN” but that’s not a problem in my humble opinion.
Yes. That's a gross generalization.
I support applications delivered via site-to-site VPN tunnels hosted by third parties. In the Customer site, the application is accessed via an RFC 1918 address. It is not part of the Customer's local network, however.
Likewise, I support applications that are locally-hosted but Internet facing and appear on a non-RFC1918 IP address even though the server is local and part of the Customer's network.
Access control policy really should be orthogonal to network address. Coupling those two will inevitably lead to mismatches to work around. I would prefer some type of user-exposed (and sysadmin-exposed, centrally controllable) method for declaring the network-level access permitted by scripts (as identified by the source domain, probably).
Think of this proposal's definition of "local" (always a tricky adjective in networking, and reportedly the proposers here have bikeshedded it extensively) as encompassing both Local Area Network addresses and non-LAN "site local" addresses.
fc00::/8 (a network block for a registry of organisation-specific assignments for site-local use) is the idea that was abandoned.
Roughly speaking, the following are analogs:
169.254/16 -> fe80::/64 (within fe80::/10)
10/8, 172.16/12, 192.168/16 -> a randomly-generated network (within fd00::/8)
For example, a service I maintain that consists of several machines in a partial WireGuard mesh uses fda2:daf7:a7d4:c4fb::/64 for its peers. The recommendation is no larger than a /48, so a /64 is fine (and I only need the one network, anyway).
fc00::/7 is not globally routable.
e.g. Imagine the following OpenWRT setup:
ULA: fd9e:c023:bb5f::/48
(V)LAN 1: IPv6 assignment hint 1, suffix 1
(V)LAN 2: IPv6 assignment hint 2, suffix ffff
Clients on LAN 1 would be advertised the prefix fd9e:c023:bb5f:1::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:1::1.
Clients on LAN 2 would be advertised the prefix fd9e:c023:bb5f:2::/64 and automatically configure addresses for themselves within it. The router itself would be reachable at fd9e:c023:bb5f:2::ffff.
Clients on LAN 1 could communicate with clients on LAN 2 (firewall permitting) and vice versa by using these ULA addresses, without any IPv6 WAN connectivity or global-scope addresses.
Most websites that need this permission only need to access one local server. Granting them access to everything violates the principle of least privilege. Most users don't know what's running on localhost or on their local network, so they won't understand the risk.
I wish there were an API to build such a firewall, e.g. as part of a browser extension, but also a simple default UI allowing the user to give access to a particular machine (e.g. the router), to the LAN, to a VPN (based on the routing table), or to "private networks" in general, in the sense Windows ascribes to that. Also access to localhost separately. The site could ask for one of these categories explicitly.
There was in Manifest V2, and it still exists in Firefox.
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
That's the API Chrome removed with Manifest V3. You can still log all web requests, but you can't block them dynamically anymore.
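For reference, the blocking form (still available to Firefox/Manifest V2 extensions) looks roughly like this; the private-range regex is deliberately simplified and `browser` is the WebExtension global.

const PRIVATE =
  /^https?:\/\/(localhost|127\.|10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/;

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Block requests that target a private address unless the requesting page
    // is itself on a private address.
    const fromPrivate = details.originUrl ? PRIVATE.test(details.originUrl) : false;
    if (PRIVATE.test(details.url) && !fromPrivate) {
      return { cancel: true };
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"],
);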
Yes, which is why they also won't understand when the browser asks if you'd like to allow the site to visit http://localhost:3146 vs http://localhost:8089. A sensible permission message ("allow this site to access resources on your local network") is better than technical mumbo jumbo which will make them just click "yes" in confusion.
For instance, on the phishing site they clicked on from an email, they'll first be prompted like:
"Chase need to verify your Local Network identity to keep your account details safe. Please ensure that you click "Yes" on the following screen to confirm your identity and access account."
Yes, that's meaningless gibberish but most people would say:
• "Not sure what that means..."
• "I DO want to access my account, though."
In the world we live in, of course, almost nothing on your average LAN has an associated mDNS service advertisement.
(It's harder to do for the rest of the local network, though.)
I guess if the permissions dialog is sensibly worded then the user will allow it.
I think this is probably a sensible proposal but I'm sure it will break stuff people are relying on.
As noted in another comment, this doesn't work unless the responding server provides proper CORS headers allowing the content to be loaded by the browser in that context: so for any request to work, the server is either wide open (cors: *) or cooperating with the requesting code (cors: website.co). The changes prevent communication without user authorization.
Use case 1 in the document and the discussion made it clear to me.
So what's the attack vector exactly? Why would it be able to attack a local device but not your Gmail account (with your browser happily sending your auth cookies) or file:///etc/passwd?
The only attack I can imagine is that _the mere fact_ of a webserver existing on your local IP is a disclosure of information for someone, but ... what's the attack scenario here again? The only thing they know is you run a webserver, and maybe they can check if you serve something at a specified location.
Does this even allow identifying the router model you use? Because I can think of a bazillion better ways to do it -- including the simple "just assume it's the default router of the specific ISP from that address".
[1] https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...
In fact, [1] literally says
> [Same-origin policy] prevents a malicious website on the Internet from running JS in a browser to read data from [...] a company intranet (which is protected from direct access by the attacker by not having a public IP address) and relaying that data to the attacker.
But this is trying to solve the problem in the wrong place. The problem isn't that the browser is making the connection, it's that the app betraying the user is running on the user's device. The Facebook app is malware. The premise of app store curation is that they get banned for this, right? Make everyone who wants to use Facebook use the web page now.
(And then corporate/enterprise managed Chrome installs could have specific subnets added to the allow list)
It’s insane to me that random internet sites can try to poke at my network or local system for any purpose without me knowing and approving it.
With all we do for security these days this is such a massive hole it defies belief. Ever since I first saw an enterprise thing that just expected end users to run a local utility (really embedded web server) for their website to talk to I’ve been amazed this hasn’t been shut down.
But if you're not using global addresses you're probably doing it wrong. Global addressing doesn't mean you're globally reachable, confusing addressing vs reachability is the source of a lot of misunderstandings. You can think of it as "everyone gets their own piece of unique address space, not routed unless you want it to be".
If you rely on users having to click "yes", then you're just making phones harder to use because everyone still using Facebook or Instagram will just click whatever buttons make the app work.
On the other hand, I have yet to come up with a good reason why arbitrary websites need to set up direct connections to devices within the local network.
There's the IPv6 argument against the proposed measures, which requires work to determine if an address is local or global, but that's also much more difficult to enumerate than the IPv4 space that some websites try to scan. That doesn't mean IPv4 address shouldn't be protected at all, either. Even with an IPv6-shaped hole, blocking local networks (both IPv4 and local IPv6) by default makes sense for websites originating from outside.
IE did something very similar to this decades ago. They also had a system for displaying details about websites' privacy policies and data sharing. It's almost disheartening to see we're trying to come up with solutions to these problems again.
I would guess it's closer to 0% than 0.1%.
Are there any common local web servers or services that use that as the default? Not that it’s not concerning, just wondering.
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
Talking about MCP agents if that’s not obvious.
TLDR, IIUC, right now, random websites can try accessing content on local IPs. You can try to blind-load e.g. http://192.168.0.1/cgi-bin/login.cgi from JavaScript, iterating through a gigantic malicious list of such known useful URLs, then grep and send back whatever you want to share with advertisers, or try POSTing backdoors to a printer's update page. No, we don't need that.
Of course, OTOH, many webapps today use localhost access to pass tokens and to talk to cooperating apps, but you only need access to 127.0.0.0/8 for that, which is harder to abuse, so that range can be exempted by default.
Disabling this, as proposed, does not affect your ability to open http://192.168.0.1/login.html, as that's just another "web" site. If JS on http://myNAS.local/search-local.html wants to access http://myLaptop.local:8000/myNasDesktopAppRemotingApi, only then you have to click some buttons to allow it.
Edit: uBlock Origin has filter for it[1]; was unchecked in mine.
I disagree. I know it’s done, but I don’t think that makes it safe or smart.
Require the user to OK it and require the server to send a header with the one _exact_ port it will access. Require that the local server _must_ use CORS and allow that server.
No website not loaded from localhost should ever be allowed to just hit random local/private IPs and ports without explicit permission.
I have struggled with this issue in the past. I have an IoT application whose web server wants to reject any requests from a non-local address. After failing to find a way to distinguish IPv6 local addresses, I ended up redirecting IPv6 requests to the local IPv4 address. And that was the end of that.
I feel like I would be in a better position to raise concerns if I could confirm that my understanding is correct: that there is no practical way for an application to determine whether an IPv6 address is link- or site-local.
I did experiment with IPv6 "link local" addresses, but these seem to be something else altogether (for use by routers rather than general applications), and don't seem to work for regular application use.
There is some wiggle room provided by including .local addresses as local servers. But implementation of .local domains seems to be inconsistent across various OSs at present. Raspberry Pi OS, for example, will do mDNS resolution of "some_address" but not of "someaddress.local"; Ubuntu 24.04 will resolve "someaddress.local", but not "someaddress". And neither will resolve "someaddress.local." (which I think was recommended at one point, but is now deprecated and non-functional). Which does seem like an issue worth raising.
And it frustrates the HECK out of me that nobody will allow use of privately issued certs for local network addresses. The "no https for local addresses" thing needs to be fixed.
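For what it's worth, the prefixes that are detectable by inspection are link-local (fe80::/10) and ULA (fc00::/7); a rough sketch is below. The underlying complaint stands: a global IPv6 address by itself tells the server nothing about whether the client is on the same LAN.

function ipv6Scope(addr: string): "link-local" | "unique-local" | "other" {
  // Look only at the first 16-bit group; good enough for a prefix check.
  const first = parseInt(addr.split(":")[0] || "0", 16);
  if ((first & 0xffc0) === 0xfe80) return "link-local";   // fe80::/10
  if ((first & 0xfe00) === 0xfc00) return "unique-local"; // fc00::/7 (ULA)
  return "other";
}

console.log(ipv6Scope("fe80::7976:820a:b5f5:39c3")); // "link-local"
console.log(ipv6Scope("fd9e:c023:bb5f:1::1"));       // "unique-local"
console.log(ipv6Scope("2600:1700:63c9:a421::2000")); // "other" (global)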
No, because it's the antithesis of IPv6 which is supposed to be globally routable. The concept isn't supposed to exist.
Not to mention Google can't even agree on the meaning of "local" - the article states they completely changed the meaning of "local" to be a redefinition of "private" halfway through brainstorming this garbage.
Creating a nonstandard, arbitrary security boundary based on CIDR subnets as an HTTP extension is completely bonkers.
As for your application, you're going about it all wrong. Just assume your application is public-facing and design your security with that in mind. Too many applications make this mistake and design saloon-door security into their "local only" application which results in overreaction such as the insanity that is the topic of discussion here.
".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
The concept is frequently misunderstood in that IPv4 consumer SOHO "routers" often combine a NAT and routing function with a firewall, but the functions are separate.
Does your router being slower and taking more CPU make you feel happy?
Do you enjoy not seeing the correct IP in remote logs, thus making debugging issues harder?
Do you like being able to naively nmap your local network fairly easily?
Also, I've seen lots of home firewalls which will identify a device based on MAC address for match criteria and let you set firewall rules based on those, so even if their IPv6 address does change often it still matches the traffic.
Maybe there's a standard primer on how to grok IPv6 addresses and set up your network, but I missed it.
Also, devices typically take 2 or 4 IPv6 addresses for some reason, so keeping on top of them is even harder.
When just looking at hosts in your network with their routable IPv6 address, ignore the prefix. This is the first few segments, probably the first four in most cases for a home network (a /64 network). When thinking about firewall rules or having things talk to each other, ignore things like "temporary" IP addresses.
So looking at this example:
Connection-specific DNS Suffix . : home.arpa
IPv6 Address. . . . . . . . . . . : 2600:1700:63c9:a421::2000
IPv6 Address. . . . . . . . . . . : 2600:1700:63c9:a421:e17f:95dd:11a:d62e
Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:9d5:6286:67d9:afb7
Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:4471:e029:cc6a:16a0
Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:91bf:623f:d56b:4404
Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:ddca:5aae:26b9:a53c
Temporary IPv6 Address. . . . . . : 2600:1700:63c9:a421:fc43:7d0a:7f8:e4c8
Link-local IPv6 Address . . . . . : fe80::7976:820a:b5f5:39c3%18
IPv4 Address. . . . . . . . . . . : 192.168.20.59
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : fe80::ec4:7aff:fe7f:d167%18
192.168.20.254
Ignore all those temporary ones. Ignore the longer one. You can ignore 2600:1700:63c9:a421, as that's going to be the same for all the hosts on your network, so you'll see it pretty much everywhere. So, all you really need to remember, if you're really trying to configure things by IP address, is that this is whatever-is-my-prefix::2000.

But honestly, just start using DNS. Ignore IP addresses for most things. We already pretty much ignore MAC addresses and rely on other technologies to automatically map IP to MAC for us. It's pretty simple to get a halfway competent DNS setup going, and many home routers will have things going by default, so it's just way easier to do things in general. I don't want to have to remember that my printer is at 192.168.20.132 or 2600:1700:63c9:a421::a210; I just want to go to http://brother or ipp://brother.home.arpa and have it work.
But as you can see this is still an explosion of complexity for the home user. More than 4x (32 --> 128), feels like x⁴ (though might not be accurate).
I like your idea of "whatever..." There should be a "lan" variable and status could be shown factored, like "$lan::2000" to the end user perhaps.
I do use DNS all the time, like "printer.lan", "gateway.lan", etc. But don't think I'm using in the router firewall config. I use openwrt on my router but my knowledge of ipv6 is somewhat shallow.
example: 2001:db8::192.168.0.42
This makes it very easy to remember, correlate and firewall.
>>> from ipaddress import IPv6Address as address
>>> address('2001:db8::192.168.0.42')
IPv6Address('2001:db8::c0a8:2a')
>>> int('2a', 16)
42
OpenWrt doesn't seem to make IPv6 static assignment easy, unfortunately.

Well, who can agree on this? Local network, private network, intranet, Tailscale and VPN, Tor? IPv6 ULA, NAT/CGNAT, SOCKS, transparent proxy? What resources are "local" to me and what resources are "remote"?
This is quite a thorny and sometimes philosophical question. Web developers are working at the OSI Layer 6-7 / TCP/IP Application Layer.
https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...
Now even cookies and things like CSRF were trying to differentiate "servers" and "origins" and "resources" along the lines of the DNS hierarchy. But this has been fraught with complication, because DNS was not intended to delineate such things, and can't do so cleanly 100% of the time.
Now these proposals are trying to reach even lower in the OSI model - Layer 3, Layer 2. If you're asking "what is on my LAN" or "what is a private network", that is not something that HTTPS or web services are supposed to know. Are you going to ask them to delve into your routing table or test the network interfaces? HTTPS was never supposed to know about your netmask or your next-hop router.
So this is only one reason that there is no elegant solution for the problem. And it has been foundational to the way the web was designed: "given a uniform locator, find this resource wherever it may be, whenever I request it." That was a simpler proposition when the Web was used to publish interesting and encyclopedic information, rather than deliver applications and access sensitive systems.
The device is an IoT guitar pedal that runs on a Raspberry Pi. In performance, on stage, a Web UI runs on a phone or tablet over a hotspot connection on the Pi, which is NOT internet-connected (since there's no expectation that there's a Wi-Fi router or internet access at a public venue). OR the Pi runs on a home Wi-Fi network, using a browser-hosted UI on a laptop or desktop. OR, I suppose, over an away-from-home Wi-Fi connection at a studio or rehearsal space.
It is not reasonable to expect my users to purchase domain names and certs for their $60 guitar pedal, which are not going to work anyway, if they are playing away from their home network. Nor is ACME provisioning an option because the device may be in use but unconnected to the internet for months at a time if users are using the Pi Hotspot at home.
I can't use password authentication to get access to the Pi web server, because I can't use HTTPS to conceal the password, and browsers disable access to JavaScript crypto APIs on non-HTTPS pages (not that I'd really trust myself to write JavaScript code to obtain auth tokens from the Pi server anyway), so doing auth over an HTTP connection doesn't really strike me as a serious option either.
Nor is it reasonable to expect my non-technical users to spend hours configuring their networks. It's an IoT device that should be just drop and play (maybe with a one-time device setup that takes place on the Pi).
There is absolutely NO way I am going to expose the server to the open internet without HTTPS and password authentication. The server provides a complex API to the client over which effects are configured and controlled. Way too much surface area to allow anyone on the internet to poke around in. So it uses IPv4 isolation, which is the best I can figure out given the circumstances. It's not like I haven't given the problem serious consideration. I just don't see a solution.
The use case is not hugely different from an IoT toothbrush. But standards organizations have chosen to leave both my (hypothetical) toothbrush and my application utterly defenseless when it comes to security. Is it any surprise that IoT toothbrushes have security problems?
How would YOU see https working on a device like that?
> ".local" is reserved for mDNS and is in the RFC, though this is frequently and widely ignored.
Yes. That was my point. It is currently widely ignored.
I understand that setting it up to delineate is harder in practice. Therein lies the rub.
In old school IPv4 you would normally assign octet two to a site and octet three to a VLAN. Oh and you start with 10.
With IPv6 you have a lot more options.
All IPv6 devices have link local addresses - that's the LAN or local VLAN - a bit like APIPA.
Then you start on .local - that's Apple and DNS and the like and nothing to do with IP addresses. That's name to address.
You can do Let's Encrypt (ACME) for "local network addresses" (I assume you mean RFC 1918 addresses: 10/8, 172.16/12, 192.168/16) - you need to look into DNS-01 and perhaps DNS CNAME. It does require quite some effort.
There is a very good set of reasons why TLS certs are a bit of a bugger to get working effectively these days. There are solutions freely available but they are also quite hard to implement. At least they are free. I remember the days when even packet capture required opening your wallet.
You might look into acme.sh if Certbot fails to work for you. You also might need to bolt down IP addressing in general, IPv4 vs IPv6 and DNS and mDNS (and Bonjour) as concepts - you seem a little hazy on that lot.
Good luck, mate
NAT has rotted people's brains unfortunately. RFC 1918 is not really the way to tell if something is "local" or not. 25 years ago I had 4 publicly routable IPv4 addresses. All 4 of these were "local" to me despite also being publicly routable.
An IP address is local if you can resolve it and don't have to communicate via a router.
It seems too far gone, though. People seem unable to separate RFC 1918 from the concept of "local network".
And if it is just me, fine I'll jump in - they should also make it so that users have to approve local network access three times. I worry about the theoretical security implications that come after they only approve local network access once.
Why does a web browser need USB or Bluetooth support? They don't.
Browsers should not be the universal platform. They’ve become the universal attack vector.
As a developer, these standards prevent you from needing to maintain separate implementations for Windows/macOS/Linux/Android.
As a user, they let you grant and revoke sandbox permissions in a granular way, including fully removing the web app from your computer.
Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
WebUSB and Web Bluetooth are opt-in when the site requests a connection/permission, as opposed to unlimited access by default for native apps. And if you don't want to use them, you can choose a browser that doesn't implement those standards.
What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
> Browsers provide a great cross-platform sandbox and make it much easier to develop secure software across all platforms.
Sure, until advertising companies find ways around and through those sandboxes because browser authors want the browsers be capable of more, in the name of a cross platform solution. The more a browser can do, the more surface area the sandbox has. (An advertising company makes the most popular browser, by the way.)
> What other platform (outside of web browsers) is a good alternative for securely developing cross-platform software that interacts with hardware?
There isn’t one, other than maybe video game engines, but it doesn’t matter. OS vendors need to work to make cross-platform software possible; it’s their fault we need a cross-platform solution at all. Every OS is a construct, and they were constructed to be different for arbitrary reasons.
A good app-permission model in the browser is much more likely to happen, but I don’t see that really happening, either. “Too inconvenient for users [and our own in-house advertisers/malware authors]” will be the reason.
MacOS handles permissions pretty well, but it could do better. If something wants local network permission, the user gets prompted. If the user says no, those network requests fail. Same with filesystem access. Linux will never have anything like this, nor will Windows, but it’s what security looks like, probably.
Users will say yes to those prompts ultimately, because as soon as users have the ability to say “no” on all platforms, sites will simply gate site functionality behind the granting of those permissions because the authors of those sites want that data so badly.
The only thing that is really going to stop behavior like this is law, and that is NEVER going to happen in the US.
So, short of laws, browsers themselves must stop doing stupid crap like allowing local network access from sites that aren’t on the local network, and nonsense stuff like WebUSB. We need to give up on the idea that anyone can be safe on a platform when we want that platform to be able to do anything. Browsers must have boundaries.
Operating systems should be the police, probably, and not browsers. Web stuff is already slow as hell, and browsers should be less capable, not more capable for both security reasons and speed reasons.
Lately, every app I install wants Bluetooth access to scan all my Bluetooth devices. I don't want that. At most, I want the app to have to declare in its manifest some specific device IDs (a short list) that the app is allowed to connect to, and have the OS limit its connections to only those devices. For example, the Bose app should only be able to see Bose devices, nothing else. The CVS (pharmacy) app should only be able to connect to CVS devices, whatever those are. All I know is the app asked for permission. I denied it.
I might even prefer if it had to register the device ids and then the user would be prompted, the same way camera access/gps access is prompted. Via the OS, it might see a device that the CVS.app registered for in its manifest. The OS would popup "CVS app would like to connect to device ABC? Just this once, only when the app is running, always" (similar to the way iOS handles location)
By id, I mean some prefix that a company registers for its devices. bose.xxx, app's manifest says it wants to connect to "bose.*" and OS filters.
Similarly for USB and maybe local network devices. Come up with an ID scheme, and have the OS prevent apps from connecting to anything not matching that ID. Effectively, don't let apps browse the network, USB, or Bluetooth.
I have been told that WhatsApp does not let you name contacts without sharing your address book back to Facebook.
the problem is, the app must respect that.
WhatsApp, for all the hate it gets, does.
"Privacy" focused Telegram doesnt-- it wouldnt work unless I shared ALL my contacts-- when I shared a few, it kept complaining I had to share ALL
On Android, Telegram works with denied access to the contacts and maintains its own, completely separate contact list (shared with desktop Telegram and other copies logged in to the same account). I've been using Telegram longer than I've been using a smartphone, and it has a completely separate contact list (as it should be).
And WhatsApp cannot be used without access to contacts: it doesn't allow creating a WhatsApp-only contact and complains that it has no place to store it until you grant access to the phone contact list.
To be honest, I prefer to have separate contact lists on all my communication channel, and even sharing contacts between phone app and e-mail app (GMail) bothers me.
Telegram is good in this aspect, it can use its own contact list, not synchronized or shared with anything else, and WhatsApp is not.
I think this feature is pretty meaningless in the way that it’s implemented.
It's also pretty annoying that applications know they have partial permission, so they keep prompting for full permission all the time anyway.
Blame Apple and Google and their horrid BLE APIs.
An app generally has to request "ALL THE PERMISSIONS!" to get RSSI which most apps are using as a (really stupid, bug prone, broken) proxy for distance.
What everybody wants is "time of flight"--but for some reason that continues to be mostly unsupported.
Then again, a local app can run a server with a proxy that adds CORS headers to the proxied request, and then you can access any site via the JS fetch/XMLHttpRequest interface; even an extension is able to modify headers to bypass CORS.
Bypassing CORS is just a matter of editing headers; what's really hard or impossible to bypass is CSP rules.
Now, the Facebook app itself is running such a CORS proxy server; even without it, a normal HTTP or WebSocket server is enough to send metrics.
Chrome already has a flag to prevent localhost access, but as said, a WebSocket can still be used.
Completely banning localhost would be detrimental.
Many users rely on self-hosted bookmarking, note-taking, and password-manager solutions that depend on a local server.
My relationship is with your site. If you want to outsource that to some other domain, do that on your servers, not in my browser.
Of course it was only later that cookies and scripting and low-trust networks were introduced.
The WWW was conceived as more of a "desktop publishing" metaphor, where pages could be formatted and multimedia presentations could be made and served to the public. It was later that the browser was harnessed as a cross-platform application delivery front-end.
Also, many sites do carefully try to guard against "linking out" or letting the user escape their walled gardens without a warning or disclaimer. As much as they may rely on third-party analytics and ad servers, most web masters want the users to remain on their site, interacting with the same site, without following an external link that would end their engagement or web session.
But then we would have had to educate users, and ad peddlers would have lost revenue.
The model we need isn’t a boolean form of trust, but rather capabilities and permissions on a per-app, per-site or per-vendor basis. We already know this, but it’s incredibly tricky to design, retrofit and explain. Mobile OSs did a lot here, even if they are nowhere near perfect. For instance, they allow apps (by default even) to have private data that isn’t accessible from other apps on the same device.
Whether the code runs in an app or on a website isn’t actually important. There is no fundamental reason for the web to be constrained except user expectations and the design of permission systems.
similar thread: https://news.ycombinator.com/item?id=44179276
There are so many excellent home automation and media/entertainment use cases for something like this.
Why not treat any local access as if it were an access to a microphone?
So that the user will be in control.
Can't you just write an extension that blocks access to domains based on origin?
Then the user can just add facebook.com as an origin to block all facebook* sites from sending any request to any registered URL - in this case localhost/127.0.0.1 domains.
The DNR API allows blocking based on initiatorDomains.
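Roughly, such a rule could look like this (the rule ID, domains, and blocked target are examples):

// declarativeNetRequest (Manifest V3): block requests that pages on
// facebook.com initiate toward localhost.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: {
        initiatorDomains: ["facebook.com"], // also matches subdomains
        urlFilter: "||localhost^",          // a second rule could cover 127.0.0.1
        resourceTypes: ["xmlhttprequest", "websocket"],
      },
    },
  ],
});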
I get that this could happen on any OS, and the proposal is from browser maker's perspective. But what about the other side of things, an app (not necessarily browser) talking to arbitrary localhost address?
Web browsers use sandboxing to keep you safe.
Web browsers can portscan your local network.
Seems like a sleazy move to draw down even more user DNS traffic data, and a worse solution than the default mitigation policy in NoScript =3
OTOH it would be cool if random websites were able to open up and use ports on my computer's network, or even on my LAN, when granted permission of course. Browser-based file- and media sharing between my devices, or games if multi-person.
That's what WebRTC does. There's no requirement that WebRTC is used to send video and audio as in a Zoom/Meet call.
That's how WebTorrent works.
This is a much better approach.
Servers can do all the hard work of gathering content from here and there.
Is the so-called "modern" web browser too large and complex
I never asked for stuff like "websockets"; I have to disable it, why
I still prefer a text-only browser for reading HTML; it does not run Javascript, it does not do websockets, CSS, images or a gazillion other things; it does not even autoload resources
It is relatively small, fast and reliable; very useful
It can read larger HTML files that make so-called "modern" web browsers choke
It does not support online ad services
The companies like Google that force ads on www users are known for creating problems for www users and then proposing solutions to them; why not just stop creating the problems
The point is that gigantic, overly complex "browsers" designed for surveillance and advertising are the problem. They are not a solution.
[0] https://www.theregister.com/2025/06/03/meta_pauses_android_t...