- https://news.ycombinator.com/item?id=9954870
- https://phoboslab.org/log/2015/07/play-gta-v-in-your-browser...
> You know what enterprise networks love? HTTP. HTTPS. Port 443. That’s it. That’s the list.
That's not enough.
Corporate networks also love to MITM their own workstations and reinterpret HTTP traffic. So, no WebSockets and no Server-Sent Events either, because their corporate firewall is a piece of software no one in the world wants and everyone in the world hates, including its own developers. Thus it only supports a subset of HTTP/1.1, and sometimes it likes to change the content while keeping Content-Length intact.
And you have to work around that, because the corporation's IT department will never lift the restrictions.
I wish I was kidding.
I try to remember that many of these environments likely supported Flash once.
If you wanna kill corporate IT, you have to kill capitalism first.
playing devil's advocate for a second, but corpIT is also working with morons as employees. most draconian rules used by corpIT have a basis in at least one real-world example. whether that example was caused directly by one of the morons they manage or was passed along from corpIT lore, people have done some dumb-ass things on corp networks.
Wherever Tech is a first-class citizen with a seat at the corporate table, it can be different.
They delegate that stuff. To the corporate IT department.
It's all CyberSecurity insurance compliance that in many cases deviates from security best practices.
For example, we got dinged on an audit because instead of using RSA4096, we used ed25519. I kid you not, their main complaint was that there weren't enough bits, which meant it wasn't secure.
Auditors are snake oil salesmen.
Because otherwise people do dumb stuff like pasting proprietary designs or PII into deepseek
It's purely illusory security, that doesn't protect anything but does levy a constant performance tax on nearly every task.
You and the OP can both be right at the same time. You imply the rules probably help a lot even while imperfect. They imply that pretending rules alone are enough is incomplete.
This is assuming the DLP service blocks the request, rather than doing something like logging it and reporting it to your manager and/or CIO.
>It's purely illusory security, that doesn't protect anything but does levy a constant performance tax on nearly every task.
Because you can't ask deepseek to extract some unstructured data for you? I'm not sure what the alternative is, just let everyone paste info into deepseek? If you found out that your data got leaked because some employee pasted some data into some random third party service, and that the company didn't have any policies/technological measures against it, would your response still be "yeah it's fine, it's purely illusory security"?
Unless the corporation is 100% in-office, I’d wager they do in fact make exceptions - otherwise they wouldn’t have a working videoconferencing system.
The challenge is getting corporate insiders to like your product enough to get it through the exception process (a total hassle) when the firewall’s restrictions mean you can’t deliver a decent demo.
The corporate firewall debate came up when we considered websockets at a previous company. Everyone has parroted the same information for so long that it was just assumed that websockets and corporate firewalls were going to cause us huge problems.
We went with websockets anyway and it was fine. Almost no traffic to the no-websockets fallback path, and the traffic that did arrive appeared to be from users with intermittent internet connections (cellular providers, foreign countries with poor internet).
I'm 100% sure there are still corporate firewalls out there blocking or breaking websocket connections, but it's not nearly the same problem in 2025 as it was in 2015.
If your product absolutely must, no exceptions, work perfectly in every possible corporate environment, then a fallback is necessary if you use websockets. I don't think it's a hard rule that websockets must be avoided due to corporate firewalls anymore, though.
Then we ran into a network where WebSockets were blocked, so we switched to streaming http.
No trouble with streaming http using a standard content-type yet.
Request lives for longer than 15 sec? Fuck you.
Request POSTs some JSON? Maybe fuck you just a little bit, when we find certain strings in the payload. We won't tell you which though.
I started the first ISP in my area. We had two T1s to Miami. When HD audio and the rudiments of video started to increase in popularity, I'd always tell our modem customers, "A few minutes of video is a lifetime of email. Remember how exciting email was?"
I've had similar experiences in the past when trying to do remote desktop streaming for digital signage (which is not particularly demanding in bandwidth terms). Multicast streaming video was the most efficient, but annoying to decode when you dropped data. I now wonder how far I could have gone with JPEGs...
I love the style of this blog-post, you can really tell that Luke has been deep down in the rabbit hole, encountered the Balrog and lived to tell the tale.
JPEG is extremely efficient to [de/en]code on modern CPUs. You can get close to 1080p60 per core if you use a library that leverages SIMD.
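If you want to sanity-check that, here's a rough single-core benchmark sketch assuming the PyTurboJPEG bindings for libjpeg-turbo; the frame contents and quality setting are arbitrary stand-ins for real desktop captures:

```python
# Rough single-core encode benchmark; results depend heavily on content and quality.
import time
import numpy as np
from turbojpeg import TurboJPEG  # pip install PyTurboJPEG (needs libjpeg-turbo installed)

jpeg = TurboJPEG()
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)          # flat 1080p BGR frame
frame[100:400, 100:1800] = np.random.randint(0, 255, (300, 1700, 3), dtype=np.uint8)

n = 120
start = time.perf_counter()
for _ in range(n):
    buf = jpeg.encode(frame, quality=70)
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.1f} frames/s, {len(buf) / 1024:.0f} KB per frame")
```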
I sometimes struggle with the pursuit of perfect codec efficiency when our networks have become this fast. You can employ half-assed compression and still not max out a 1gbps pipe. From Netflix & Google's perspective it totally makes sense, but unless you are building a streaming video platform with billions of customers I don't see the point.
You can still have weird broken stallouts though.
I dunno, this article has some good problem solving but the biggest and mostly untouched issue is that they set the minimum h.264 bandwidth too high. H.264 can do a lot better than JPEG with a lot less bandwidth. But if you lock it at 40Mbps of course it's flaky. Try 1Mbps and iterate from there.
And going keyframe-only is the opposite of how you optimize video bandwidth.
From the article:
“Just lower the bitrate,” you say. Great idea. Now it’s 10Mbps of blocky garbage that’s still 30 seconds behind.
10Mbps is still way too high of a minimum. It's more than YouTube uses for full motion 4k.
And it would not be blocky garbage, it would still look a lot better than JPEG.
For mostly-static content at 4fps you can cut a bunch more bitrate corners before it looks bad. (And 2-3 JPEGs per second won't even look good at 1Mbps.)
Video players used to call it buffering, and resolving it was called buffering issues.
Players today can keep an eye on network quality while playing too, which is neat.
Bargaining.
smaller thing: many, many moons ago, I did a lot of work with H.264. "A single H.264 keyframe is 200-500KB." is fantastical.
Can't prove it wrong, because it will be correct given arbitrary dimensions and encoding settings, but it's pretty hard to end up with.
Just pulled a couple 1080p's off YouTube, biggest I-frame is 150KB, median is 58KB (`ffprobe $FILE -show_frames -of compact -show_entries frame=pict_type,pkt_size | grep -i "|pict_type=I"`)
WebRTC over UDP is one choice for lossy situations. Media over Quic might be another (is the future here?), and it might be more enterprise firewall friendly since HTTP3 is over Quic.
Good engineering: when you're not too proud to do the obvious, but sort of cheesy-sounding solution.
> closes tab
Eh, there are a few easy things one can try. Make sure to use a non-ancient kernel on the sender side (to get the necessary features), then enable BBR and NOTSENT_LOWAT (https://blog.cloudflare.com/http-2-prioritization-with-nginx...) to avoid buffering more than what's in-flight and then start dropping websocket frames when the socket says it's full.
Also, with tighter integration with the h264 encoder loop one could tell it which frames weren't sent and account for that in pframe generation. But I guess that wasn't available with that stack.
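For reference, a minimal Python sketch of that sender-side tuning (Linux only; assumes BBR is available on the host, and hard-codes the Linux value of TCP_NOTSENT_LOWAT where Python doesn't expose a constant):

```python
import select
import socket

TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)  # 25 is the Linux option number

def tune_sender(sock: socket.socket, lowat_bytes: int = 16 * 1024) -> None:
    # Switch this connection to BBR congestion control (module must be loaded on the host).
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    # Limit not-yet-sent data buffered in the kernel so writability reflects the real link.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, lowat_bytes)

def send_or_drop(sock: socket.socket, frame: bytes) -> bool:
    # If the socket is not writable, the link is saturated: drop this websocket
    # frame rather than queueing stale video behind it.
    _, writable, _ = select.select([], [sock], [], 0)
    if not writable:
        return False
    sock.sendall(frame)
    return True
```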
Maybe because the basic frequency transform is 4x4 vs 8x8 for JPG?
You can run all WebRTC traffic over a single port. It’s a shame you spent so much time/were frustrated by ICE errors
That’s great you got something better and with less complexity! I do think people push ‘you need UDP and BWE’ a little too zealously. If you have a homogeneous set of clients stuff like RTMP/Websockets seems to serve people well
Or is it intra-only H.264?
I mean, none of this is especially new. It's an interesting trick though!
For a fast start of the video, reverse the implementation: instead of downgrading from Websockets to polling when connection fails, you should upgrade from polling to Websockets when the network allows.
Socket.io was one of the first libraries that did that switching, and it had it wrong at first too. They learned the enterprise network behaviour and switched the implementation.
Screenshot once per second. Works everywhere.
I’m still waiting for mobile screenshare api support, so I could quickly use it to show stuff from my phone to other phones with the QR link.
I wonder if they just tried restarting the stream at a lower bitrate once it got too delayed.
The talk about how the images look more crisp at a lower FPS is just tuning that I guess they didn't bother with.
https://developers.google.com/speed/webp/docs/webp_study
ALSO - the blog author could simplify - you don't need any code at all in the web browser.
The <img> tag automatically does motion jpeg streaming.
This would really cut down on the bandwidth of static coding terminals where 90% of screen is just cursor flashing or small bits of text moving.
If they really wanted to be ambitious they could also detect scrolling and do an optimization client-side where it translates some of the existing areas (look up CopyRect command in VNC).
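For what it's worth, a minimal sketch of the <img>-only MJPEG idea: serve multipart/x-mixed-replace and point an <img> at the URL. grab_jpeg() is a hypothetical placeholder for whatever produces the screenshots:

```python
# Minimal multipart/x-mixed-replace (MJPEG) server; a plain
# <img src="http://host:8080/"> renders it with no client-side JavaScript.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

BOUNDARY = b"frameboundary"

def grab_jpeg() -> bytes:
    with open("screenshot.jpg", "rb") as f:   # placeholder frame source
        return f.read()

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         f"multipart/x-mixed-replace; boundary={BOUNDARY.decode()}")
        self.end_headers()
        while True:
            frame = grab_jpeg()
            self.wfile.write(b"--" + BOUNDARY + b"\r\n")
            self.wfile.write(b"Content-Type: image/jpeg\r\n")
            self.wfile.write(f"Content-Length: {len(frame)}\r\n\r\n".encode())
            self.wfile.write(frame + b"\r\n")
            time.sleep(0.25)                  # ~4 fps

HTTPServer(("", 8080), MJPEGHandler).serve_forever()
```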
Why not just send text? Why do you need video at all?
(Although the fact they decided to use Moonlight in an enterprise product makes me wonder if their product actually was vibe coded)
> You’re watching the AI type code from 45 seconds ago
>
> By the time you see a bug, the AI has already committed it to main
>
> Everything is terrible forever
Is this satire? I mean: if the solution for things to not be terrible forever consists in catching what an AI is doing in 45 seconds (!) before the AI commits to trunk, I'm sorry but you should seriously re-evaluate your life plans.
The standard supports adaptive bit rate playback so you can provide both low quality and high quality videos and players can switch depending on bandwidth available.
So the only plausible thing to do was to pre-build HTML pages for content pages and let Angular's JS take its time to load (for UX functionality). The page flickered when the JS loaded for the first time, but we solved the search engine problem.
> We added a keyframes_only flag. We modified the video decoder to check FrameType::Idr. We set GOP to 60 (one keyframe per second at 60fps). We tested.
Why muck around with P-frames and keyframes? Just make your video 1fps.
> Now it’s 10Mbps of blocky garbage that’s still 30 seconds behind.
10 Mbps is way too much. I occasionally watch YouTube videos where someone writes code. I set my quality to 1080p to be comparable with the article, and YouTube serves me the video at way less than 1 Mbps. I did some quick napkin math for a random coding video and it came out to 0.6 Mbps. It's not blocky garbage at all.
Temporal SVC (reduce framerate if bandwidth constrained) is pretty widely supported by now, right? Though maybe not for H.264, so it probably would have scaled nicely but only on Webrtc?
There is another recovery option:
- increase the JPEG framerate every couple of seconds until the bandwidth consumption approaches the H264 stream bandwidth estimate
- keep track of latency changes. If the client reports a stable latency range, and it is acceptable (<1s latency, <200ms variance?) and bandwidth use has reached 95% of the H264 estimate, re-activate the stream
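A rough sketch of that recovery loop; the thresholds mirror the numbers above, while the H.264 bandwidth estimate and average JPEG frame size are assumed placeholders:

```python
from dataclasses import dataclass

@dataclass
class ProbeState:
    jpeg_fps: float = 2.0
    h264_estimate_bps: float = 4_000_000       # assumed estimate for the real stream
    avg_jpeg_frame_bits: float = 120_000 * 8   # assumed ~120 KB per JPEG

    def current_bps(self) -> float:
        return self.jpeg_fps * self.avg_jpeg_frame_bits

    def tick(self, latencies_ms: list[float]) -> str:
        """Called every couple of seconds with recent client-reported latencies."""
        stable = (max(latencies_ms) - min(latencies_ms) < 200
                  and max(latencies_ms) < 1000)
        if not stable:
            self.jpeg_fps = max(1.0, self.jpeg_fps / 2)    # back off the probe
            return "stay_on_jpeg"
        if self.current_bps() >= 0.95 * self.h264_estimate_bps:
            return "switch_to_h264"                        # link looks healthy again
        self.jpeg_fps = min(15.0, self.jpeg_fps * 1.25)    # push a bit more bandwidth
        return "stay_on_jpeg"
```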
Given that text/code is what is being viewed, lower res and adaptive streaming (HLS) are not really viable solutions since they become unreadable at lower res.
If remote screen sharing is a core feature of the service, I think this is a reasonable next step for the product.
That said, IMO at a higher level if you know what you're streaming is human-readable text, it's better to send application data pipes to the stream rather than encoding screenspace videos. That does however require building bespoke decoders and client viewing if real time collaboration network clients don't already exist for the tools (but SSH and RTC code editors exist)
I believe the latter can be adjusted in codec settings.
This would make sense... if they were using UDP, but they are using TCP. All the JPEGs they send will get there eventually (unless the connection drops). JPEG does not fix your buffering and congestion control problems. What presumably happened here is the way they implemented their JPEG screenshots, they have some mechanism that minimizes the number of frames that are in-flight. This is not some inherent property of JPEG though.
> And the size! A 70% quality JPEG of a 1080p desktop is like 100-150KB. A single H.264 keyframe is 200-500KB. We’re sending LESS data per frame AND getting better reliability.
h.264 has better coding efficiency than JPEG. For a given target size, you should be able to get better quality from an h.264 IDR frame than a JPEG. There is no fixed size to an IDR frame.
Ultimately, the problem here is a lack of bandwidth estimation (apart from the sort of binary "good network"/"cafe mode" thing they ultimately implemented). To be fair, this is difficult to do and being stuck with TCP makes it a bit more difficult. Still, you can do an initial bandwidth probe and then look for increasing transmission latency as a sign that the network is congested. Back off your bitrate (and if needed reduce frame rate to maintain sufficient quality) until transmission latency starts to decrease again.
WebRTC will do this for you if you can use it, which actually suggests a different solution to this problem: use websockets for dumb corporate network firewall rules and just use WebRTC everything else
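Something like this AIMD-style loop is one way to do the latency-based backoff described above (not the article's code; the encoder object and its set_bitrate() call are hypothetical):

```python
class BitrateController:
    def __init__(self, encoder, start_bps=4_000_000, floor_bps=300_000, ceil_bps=20_000_000):
        self.encoder = encoder
        self.bps = start_bps
        self.floor = floor_bps
        self.ceil = ceil_bps
        self.baseline_ms = None            # best transmission latency seen so far

    def on_frame_acked(self, tx_latency_ms: float) -> None:
        """Feed in per-frame send-to-ack latency measured over the TCP/websocket path."""
        if self.baseline_ms is None:
            self.baseline_ms = tx_latency_ms
            return
        self.baseline_ms = min(self.baseline_ms, tx_latency_ms)
        if tx_latency_ms > self.baseline_ms * 2:
            self.bps = max(self.floor, int(self.bps * 0.7))   # congestion: back off fast
        else:
            self.bps = min(self.ceil, int(self.bps * 1.05))   # probe upward slowly
        self.encoder.set_bitrate(self.bps)
```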
Hmm, they must be doing something wrong, they're usually not that heavy.
You can then extract the frames from the video and reconstruct the original JPEG.
Additionally, instead of converting to video, you can use the smaller images of the original to progressively load the bigger image, i.e. when you get the first frame you have a lower-quality version of the whole image, then as you get more frames the code progressively adds detail with the extra pixels contained in each frame.
It was a fun project, but the extra compression doesn’t work for all images, and I also discovered how amazing jpeg is - you can get amazing compression just by changing the quality/size ratio parameter when creating a file
This is the definition of over-engineering. I don't usually criticize ideas but this is so stupid my head hurts.
It appears that the writer has jumped to conclusions at every turn and it's usually the wrong one.
The reason that the simple "poll for jpeg" method works is that polling is actually a very crude congestion control mechanism. The sender only sends the next frame when the receiver has received the last frame and asks for more. The downside of this is that network latency affects the frame rate.
The frame rate issue with the polling method can be solved by sending multiple frame requests at a time, but only as many as will fit within one RTT, so the client needs to know the minimum RTT and the sender's maximum frame rate.
The RFB (VNC) protocol does this, by the way. Well, the thing about rtt_min and frame rate isn't in the spec though.
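To make the "as many requests as fit in one RTT" point concrete, a tiny illustrative sketch (numbers are examples, not from the post):

```python
import math

def requests_in_flight(rtt_min_s: float, sender_max_fps: float) -> int:
    # Frames the sender could produce during one round trip; request that many
    # ahead so the pipe stays full, and never more.
    return max(1, math.ceil(rtt_min_s * sender_max_fps))

print(requests_in_flight(0.200, 10))   # 200 ms RTT, 10 fps sender -> 2 requests in flight
```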
Now, I will not go through every wrong assumption, but as for this nonsense about P-frames and I-frames: with TCP, you only need one I-frame. The rest can be all P-frames. I don't understand how they came to the conclusion that sending only I-frames over TCP might help with their latency problem. Just turn off B-frames and you should be OK.
The actual problem with the latency was that they had frames piling up in buffers between the sender and the receiver. If you're pushing video frames over TCP, you need feedback. The server needs to know how fast it can send. Otherwise, you get pile-up and a bunch of latency. That's all there is to it.
The simplest, absolutely foolproof way to do this is to use TCP's own congestion control. Spin up a thread that does two things: encodes video frames and sends them out on the socket using a blocking send/write call. Set SO_SNDBUF on that socket to a value that's proportional to your maximum latency tolerance and the rough size of your video frames.
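A minimal sketch of that blocking-send approach, with a hypothetical grab_and_encode() frame source and simple length-prefixed framing:

```python
import socket

MAX_BUFFERED_BYTES = 2 * 150_000          # ~2 typical frames' worth of latency tolerance

def stream_frames(host: str, port: int, grab_and_encode) -> None:
    sock = socket.create_connection((host, port))
    # Small send buffer = bounded sender-side queue (the kernel may round this up).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, MAX_BUFFERED_BYTES)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while True:
        frame = grab_and_encode()          # capture + encode the next video frame
        header = len(frame).to_bytes(4, "big")
        sock.sendall(header + frame)       # blocks when the link can't keep up,
                                           # so the encode loop slows to the link rate
```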
One final bit of advice: use ffmpeg (libavcodec, libavformat, etc). It's much simpler to actually understand what you're doing with that than some convoluted gstreamer pipeline.
My hard disk ended up filling up with tens of gigabytes of screenshots.
I lowered the quality. I lowered the resolution, but this only delayed the inevitable.
One day I was looking through the folder and I noticed, well, almost all the image data in almost all of these screenshots is identical.
What if I created some sort of algorithm which would allow me to preserve only the changes?
I spent embarrassingly long thinking about this before realizing that I had begun to reinvent video compression!
So I just wrote a ffmpeg one-liner and got like 98% disk usage reduction :)
Thinks: why not send text instead of graphics, then? I'm sure it's more complicated than that...
Look at the end of the video, the photometry data count stops at "7996 kbytes received"(!)
> "Turns out, 40Mbps video streams don’t appreciate 200ms+ network latency. Who knew. “Just lower the bitrate,” you say. Great idea. Now it’s 10Mbps of blocky garbage"
Who could do anything useful with 10Mbps. :/
[1] https://en.wikipedia.org/wiki/File:Huygens_descent.ogv