It doesn't work on Firefox. It appears not to work on Chrome. The suggestion is to use Edge, which on Windows already gets 4K support in Netflix anyway.
Here's a 4K enabler that only enables 4K where it's already enabled.
The problem here is requiring hardware-attested DRM: Widevine L1 in Edge on Windows, and Apple FairPlay in Safari on macOS. The only way to get hardware-attested DRM is via browser-specific (i.e. native code) support that interfaces with the OS and GPU drivers. You can't get there through an extension.
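For the curious, this is roughly what that negotiation looks like from the page's side. A minimal sketch using the standard EME API; the codec and robustness strings below are illustrative, not Netflix's actual request:

```typescript
// A minimal sketch of how a player demands hardware DRM via standard EME.
// On a software-only stack (e.g. desktop Chrome's Widevine L3) this call
// rejects. An extension can monkey-patch it to resolve anyway, but the
// license challenge is generated and signed inside the CDM, which reports
// its real security level to the license server, so the lie doesn't hold.
const configs: MediaKeySystemConfiguration[] = [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{
    contentType: 'video/mp4; codecs="avc1.640028"', // example codec string
    robustness: 'HW_SECURE_ALL', // hardware pipeline, i.e. Widevine L1
  }],
}];

navigator.requestMediaKeySystemAccess('com.widevine.alpha', configs)
  .then(() => console.log('hardware Widevine available, 4K negotiable'))
  .catch(() => console.log('software Widevine only, capped below 4K'));
```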
I understand it spoofs all of the checks it can, but the only Chromium browser that supports Widevine L1 (a requirement for 4K) is Edge, so even if all of the check spoofing works, it still won't do 4K.
There's even a table in the README that describes this exact scenario.
> If you're paying for 4K but using Chrome, Firefox, or a setup Netflix doesn't "approve," you're stuck at 1080p or lower. This extension fixes that.
I get the confusion, though; I'm now second-guessing whether I misread the README. It appears to only be useful on Edge on Windows.
Does Edge currently ship Widevine L1? Last time I checked it was PlayReady SL3000, but that was a while ago now.
There are sensible-ish technical reasons why they can't deliver DRM'd 4K on Linux, but if a browser extension can upgrade you to 4K, there are no excuses left on the technical level.
But when I have to fiddle around for 30 minutes just to see a picture (it worked before, until it suddenly stopped), pirating the movie becomes the better option, because I certainly don't see the point in paying and wasting more of my time.
And the piracy cat-and-mouse game is stupid, as in the end the content is always available illegally anyway; nobody wins except the people developing and selling DRM.
So you do studies, you look at the impact of quality changes on customer churn, and then you move the line appropriately.
Here is a good thread on the topic: https://www.reddit.com/r/Piracy/comments/17ez7mi/how_come_it...
How does Netflix detect "suspicious" activity? Does $NFLX allow 4K streaming on GrapheneOS? If so, could you pin a different certificate and do some HTTP-proxy traffic manipulation to obfuscate the identity of the device (presumably an Android phone), or otherwise work around the DRM?
I want to understand more about this, but unfortunately the Reddit thread is bits and pieces scattered amongst clueless commentary, making it challenging to wade through.
See this AWS offering (probably what they use for Prime Video; Netflix has their own):

> For large-scale per-viewer, implement a content identification strategy that allows you to trace back to specific clients, such as per-user session-based watermarking. With this approach, media is conditioned during transcoding and the origin serves a uniquely identifiable pattern of media segments to the end user. A session-to-user mapping service receives encrypted user ID information in the header or cookies of the request context and uses this information to determine the uniquely identifiable pattern of media segments to serve to the viewer. This approach requires multiple distinctly watermarked copies of content to be transcoded, with a minimum of two sets of content for A/B watermarking. Forensic watermarking also requires YUV decompression, so encoding time for 4K feature length content can take upwards of 20 hours. DRM service providers in the AWS Partner Network (APN) are available to aid in the deployment of per-viewer content forensics.

<https://docs.aws.amazon.com/wellarchitected/latest/streaming...>

They also use a traitor tracing scheme (Tardos codes), such that if multiple pirates get together to try to remove the watermark they will fail; you would need an unreasonably large number of pirates to succeed for any length of time.
> They also use a traitor tracing scheme (Tardos codes), such that if multiple pirates get together to try to remove the watermark they will fail; you would need an unreasonably large number of pirates to succeed for any length of time.
Why?
They are designed to survive being recorded by a phone at an angle. The embedding is only 1 bit per segment, and a segment can be multiple megabytes.
> Why?
Tardos code length scales as the square of the number of traitors, times a constant. For example, a movie would typically have 2000 segments -> 2000 bits of code. By my calculation, at around 7 traitors some start to skate by detection. But detection can be made additive across leaked content, so with another 2000 bits all 7 will get caught: while they may not score highly enough to be reliably accused from one leak, they will be under suspicion, and that suspicion can later be sharpened.
To be clear, what the traitors are doing is pooling all the segment versions available to them and choosing one at random for each position. This is the best strategy they have; a close second is choosing the version that the majority of them received.
Trying to remove the actual 1-bit watermark from a segment isn't typically feasible: every segment carries a unique adjustment to encode it, and the embedding algorithm takes a secret key.
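To make the scoring concrete, here's a minimal sketch of the symmetric Tardos accusation score (variable names are mine; real deployments calibrate thresholds to the collusion size and false-positive target):

```typescript
// Each position i has a secret bias p[i] drawn once at embedding time;
// user bit x[i] was sampled as 1 with probability p[i]. Given the bits
// y[i] recovered from a leaked copy, innocent users' scores hover near 0
// while colluders' scores grow with code length.
function tardosScore(x: number[], y: number[], p: number[]): number {
  let score = 0;
  for (let i = 0; i < y.length; i++) {
    const g = Math.sqrt((1 - p[i]) / p[i]);
    if (y[i] === 1) score += x[i] === 1 ? g : -1 / g;
    else            score += x[i] === 0 ? 1 / g : -g;
  }
  return score; // accuse if score exceeds a threshold ~ O(sqrt(length))
}
```

Scores add across independent leaks, which is the "additive" detection mentioned above: colluders who skate under the threshold on one movie accumulate suspicion on the next.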
Any idea what this looks like? I assume it's not visible to the human eye, but being able to survive this level of degradation is quite impressive.
It generally appears as patterns sitting just inside the noise. Good systems pick locations where it's easier to hide and turn the watermark off when the scene would expose it. When it's badly done, increasing sharpness in a scene can help reveal it.
Basically, if you can damage the watermark the picture quality is bad enough that it's harming your viewing. You need to compress into crap SD quality to make it hard to detect and even then you'll get something.
You don't even need a complete pattern; if you can recover enough fragments you can narrow down the possible identities until you have a high match probability, like partial fingerprints or a DNA match.
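A toy illustration of that narrowing, with made-up patterns (real systems score probabilistically rather than exact-matching):

```typescript
// Even with many segments unrecoverable ('?'), the surviving bits
// usually narrow thousands of candidate patterns down to one.
function matches(recovered: string, candidate: string): boolean {
  return [...recovered].every((c, i) => c === '?' || c === candidate[i]);
}

const candidates = new Map([
  ['user-1', 'ABBABAAB'],
  ['user-2', 'ABABBBAA'],
  ['user-3', 'BABBAABA'],
]);

const leaked = 'A?AB??A?'; // only 5 of 8 segment versions identified
const suspects = [...candidates].filter(([, p]) => matches(leaked, p));
console.log(suspects.map(([u]) => u)); // -> ['user-2']
```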
They don't use the highest frequencies, as those watermarks are easy to obliterate, and they don't use the lowest frequencies, as those would noticeably affect quality; the focus is generally on the mid-range frequencies. However, for A/B watermarking in particular, which involves 1-bit watermarks, low frequencies may actually be fair game.
Keep in mind that when embedding watermarks of significant size (>100 bits), for example a camera that includes the device's serial number in every photo, error-correcting codes would also be used. For 1-bit watermarks the error correction is likely ad hoc: you construct some mathematical object (for example, a few real numbers derived from the frames of a segment) which remains approximately fixed through transformations, and you can afford to be wasteful.
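Here's a hedged sketch of that "approximately fixed object" idea, using quantization index modulation on a mean-luma statistic as a stand-in for whatever proprietary feature a real system would use:

```typescript
// Derive a robust real number from a segment (here, mean luma of a fixed
// region) and nudge it onto one of two quantizer lattices to embed a bit.
const STEP = 4.0; // quantizer step; larger = more robust, more visible

function embedBit(meanLuma: number, bit: 0 | 1): number {
  // Even multiples of STEP encode 0, odd half-offsets encode 1.
  const offset = bit === 0 ? 0 : STEP / 2;
  return Math.round((meanLuma - offset) / STEP) * STEP + offset;
}

function extractBit(observedLuma: number): 0 | 1 {
  // Decide which lattice the (possibly degraded) value is closest to.
  const even = Math.round(observedLuma / STEP) * STEP;
  const odd = Math.round((observedLuma - STEP / 2) / STEP) * STEP + STEP / 2;
  return Math.abs(observedLuma - even) <= Math.abs(observedLuma - odd) ? 0 : 1;
}

// The bit survives as long as re-encoding perturbs the statistic by
// less than STEP/4; degrading it further means visibly degrading video.
console.log(extractBit(embedBit(117.3, 1) + 0.8)); // -> 1
```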
For every segment in a video there will be two versions. Every user will get a unique sequence of segments served to them.
Also, I assume the file in question is 4K content; I don't know how they treat other types.
If a single video has, say, 100 segments with two versions each, that's 2^100 possible combinations, far more than enough to guarantee every viewer a unique copy. There would of course have to be a mapping between user/device ID and segment order; one plausible shape for it is sketched below.
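The AWS doc only says a mapping service exists; one way to build it (the names and the HMAC derivation here are my assumptions) is to derive the A/B pattern from a keyed hash of the user and session, so the server can recompute the pattern later when tracing a leak:

```typescript
import { createHmac } from 'node:crypto';

// Hypothetical sketch: each bit of a keyed digest picks version A or B
// of one segment, giving each (user, session) a reproducible pattern.
function segmentPattern(userId: string, sessionId: string, numSegments: number): ('A' | 'B')[] {
  const digest = createHmac('sha256', 'server-side-secret') // hypothetical key
    .update(`${userId}:${sessionId}`)
    .digest();
  const pattern: ('A' | 'B')[] = [];
  for (let i = 0; i < numSegments; i++) {
    const bit = (digest[Math.floor(i / 8) % digest.length] >> (i % 8)) & 1;
    pattern.push(bit ? 'A' : 'B');
  }
  return pattern;
}

console.log(segmentPattern('user-42', 'session-7', 100).join(''));
```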
For hardware DRM schemes, the initial key material is typically provisioned during manufacturing.
Since the server-side is able to identify the client device, they can in theory fingerprint the content if they want to. That way if someone cracks and shares the content, they can look at the fingerprint and figure out which device (and which account) leaked it - and then ban them.
I've never seen direct evidence that Netflix fingerprints their 4K content (although I've never properly looked), so I suspect the device-burning thing might be a bit of an urban legend. But it is technically plausible.
What's the deal with Netflix's not-very-good 4K streams? Colour quantization or something? It's not just a one-off; why do 4K Netflix shows look like rubbish compared to a moderately encoded whatever from BitTorrent?
So, about 10 GB or less on Netflix versus 30 GB or more on Apple TV+, dissected by DPI on my TP-Link Omada gateway.
And indeed, I think it shows: I can't notice any banding or moiré effects on pretty much any Apple TV+ content, while it's clear as night and day that Netflix compresses the hell out of their content.
The other trick some groups use is so-called hybrid releases. This involves combining video and audio from multiple sources to achieve the best possible quality. These are usually explicitly tagged as HYBRID, and afaik this mostly applies to 4K remuxes.
BitTorrent pirates may source shows from Netflix, but they may also source them from other places with higher-bitrate encodings.
I don't want to copy things and distribute them to others. I want to have one copy that keeps working indefinitely and doesn't go away or fail to follow me across systems.
THIS EXTENSION DOES NOT WORK!
let me put it another way:
THIS EXTENSION DOES NOTHING USEFUL!
The author did not reverse engineer anything. He simply asked Claude Code to make this without testing or verifying any of the outputs.
The author did not check if the extension actually works. He simply asked Claude Code to make this without testing or verifying any of the outputs.
Other commenters in this thread have noted that this extension cannot do what it claims. [1] The author simply asked Claude Code to make this without testing or verifying any of the outputs.
Thanks for listening to my TED talk.
This extension tries to spoof HDCP status and codec support, which is stupid and will not provide any benefit, since those checks are ultimately enforced by hardware.
But it also patches Cadmium (Netflix's web player) to request a custom set of profiles, which is useful and can improve compatibility: https://github.com/Pickle-Pixel/netflix-force-4k/blob/72e179...
For example, here's a set of profiles that makes 1080p work on Linux, as opposed to a mere 720p: https://github.com/DavidBuchanan314/Turbo-Recadmiumator/blob... (or at least it used to, I haven't tested it in ages)
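For anyone who hasn't read the linked code, the general technique looks something like this. A paraphrased sketch, not the extension's actual code; the endpoint substring, body shape, and profile ids are assumptions based on the linked repos:

```typescript
// Wrap window.fetch in a page script and rewrite the profile list in the
// player's manifest request before it leaves the browser.
const CUSTOM_PROFILES = [
  'playready-h264mpl40-dash', // example profile ids of the kind
  'heaac-2-dash',             // the linked repos ask for
];

const realFetch = window.fetch.bind(window);
window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url = typeof input === 'string' ? input : input.toString();
  if (url.includes('/manifest') && init && typeof init.body === 'string') {
    try {
      const body = JSON.parse(init.body);
      if (Array.isArray(body?.params?.profiles)) {
        body.params.profiles = CUSTOM_PROFILES; // swap in our own list
        init = { ...init, body: JSON.stringify(body) };
      }
    } catch {
      // body wasn't JSON; leave the request untouched
    }
  }
  return realFetch(input, init);
};
```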
The author said the extension did not work in Chrome. [1] But they didn't have enough respect for other people to say so plainly, where everyone would see it.
Many such cases
It doesn't work in Chrome or Firefox; it works only in Edge.
Everything your computer can do is inspectable with the correct application of nitric acid, electron microscopy, and image-processing algorithms running on a supercomputer.
You could also try to get hired on the Widevine team or a GPU vendor. Corporate espionage, yay!
picklepixel•1w ago
Built an extension that spoofs all of these. The interesting discovery: you have to intercept every layer. Miss one and you're back to 1080p.
Here's the catch though. Even with all the JavaScript spoofs working, Chrome still won't get 4K. Netflix requires Widevine L1 (hardware DRM), and Chrome only has L3 (software). The browser literally can't negotiate the security level Netflix wants. Edge on Windows has L1, so the extension actually delivers 4K there.
So what's the point on Chrome? Honestly, not much for 4K specifically. But the reverse engineering was the interesting part: understanding how Netflix fingerprints devices and decides what quality to serve. The codebase documents all the APIs they check.
On Edge: works reliably, getting 3840x2160 at 15000+ kbps. On Chrome: spoofs work, DRM negotiation fails, stuck at 1080p.
The repo has detailed documentation on what each spoof does and why. Happy to discuss the technical approach or answer questions.
doctorpangloss•1w ago
But I cannot understand why someone would write comments on Hacker News with an LLM. How could you say something was interesting if you didn't even do it?
michaelt•1w ago
Netflix says "Ultra HD (2160p)" requires Microsoft Edge on Windows [1].
This is a "Netflix 4K Enabler" extension that spoofs being Microsoft Edge on Windows - but unless I'm misunderstanding, the extension only works on Microsoft Edge, on Windows.
Under what circumstances would a user want this extension?
[1] https://help.netflix.com/en/node/30081
stevemk14ebr•1w ago
Am I missing something?