
Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•30s ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
1•nar001•2m ago•1 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•3m ago•0 comments

Jeremy Wade's Mighty Rivers

https://www.youtube.com/playlist?list=PLyOro6vMGsP_xkW6FXxsaeHUkD5e-9AUa
1•saikatsg•3m ago•0 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
1•sam256•5m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•5m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•6m ago•1 comments

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

1•amichail•8m ago•0 comments

Show HN: Convert your articles into videos in one click

https://vidinie.com/
1•kositheastro•11m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•11m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•14m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•14m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•15m ago•1 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•16m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•21m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•23m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•26m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•27m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
2•michalpleban•28m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•29m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•mitchbob•29m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
2•alainrk•30m ago•1 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•30m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
2•edent•33m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•37m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•37m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•42m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•43m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•44m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•47m ago•0 comments

Transfering Files with gRPC

https://kreya.app/blog/transfering-files-with-grpc/
52•CommonGuy•1w ago

Comments

sluongng•1w ago
https://github.com/googleapis/googleapis/blob/master/google/... is a more complete version of this. It supports resumable uploads, and the download can start from an offset within a file, allowing you to download only part of the file instead of the whole thing.

Another version of this is to use grpc to communicate the "metadata" of a file to download, and then "side-load" the file using a side channel with http (or some other lightweight copy method). Gitlab uses this to transfer Git packfiles and serve git fetch requests iirc https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/sidec...

pipo234•1w ago
I understand some of the appeal of grpc, but resumable uploads and download offsets have long been part of plain http (e.g. RFC 7233).

Relying on http has the advantage that you can leverage commodity infrastructure like caching proxies and CDN.

Why push protobuf over http when all you need is present in http already?

avianlyric•1w ago
Because you may already have robust and sensible gRPC infrastructure setup and working, and setting up the correct HTTP infrastructure to take advantage of all the benefits that plain old HTTP provides may not be worth it.

If moving big files around is a major part of the system you’re building, then it’s worth the effort. But if you’re only occasionally moving big files around, then reusing your existing gRPC infrastructure is likely preferable. Keeps your systems nice and uniform, which makes it easier to understand later once you’ve forgotten what you originally implemented.

a-dub•1w ago
this.

also, http/s compatibility falls off in the long tail of functionality. i've seen cache layers fail to properly implement restartable http.

that said, making long transfers actually restartable, robust and reliable is a lot more work than is presented here.

chasil•1w ago
I see that QUIC file transfer protocols are available, including a Microsoft SMB implementation.

These would be the ultimate in resumability and mobility between networks, assuming that they exploit the protocol to the fullest.

pipo234•1w ago
Simplicity makes sense, of course. I just hadn't considered a grpc-only world. But I guess that makes sense in today's Kubernetes/node/python/llm world where grpc is the glue that once was SOAP (or even CORBA).

Still, stateful protocols have a tendency to bite when you scale up. And HTTP is specifically designed to be stateless and you get scalability for free as long as you stick with plain GET requests...

jayd16•1w ago
gRPC runs over http. What infra would be missing?

If you happen to be on ASP.NET or Spring Boot it's some boilerplate to stand up plain HTTP and gRPC endpoints side by side, but I guess you could be running something more exotic than that.

hpdigidrifter•1w ago
http/2 is nothing like http/1

feel free to put them both behind load balancers and see how you go

sluongng•1w ago
The evolving schema is much more attractive than a bunch of plain text HTTP headers when you want to communicate additional metadata with the file download/upload.

For example, there are common metadata such as the digest (hash) of the blob, the compression algorithm, the base compression dictionary, whether Reed-Solomon is applicable or not, etc...

And like others have pointed out, having existing grpc infrastructure in place definitely makes using it a lot easier.

But yeah, it's a tradeoff.
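
The kind of evolving blob metadata described above could look like this as a schema, purely as a hypothetical sketch (this is not the googleapis ByteStream API; every field name here is illustrative):

```protobuf
syntax = "proto3";

// Hypothetical metadata for a blob transfer.
message BlobMetadata {
  bytes  digest           = 1; // hash of the uncompressed blob
  string digest_function  = 2; // "sha256", "blake3", ...
  string compression      = 3; // "identity", "zstd", ...
  bytes  compression_dict = 4; // optional shared compression dictionary
  bool   erasure_coded    = 5; // whether Reed-Solomon applies
  // New fields can be appended later without breaking old clients,
  // which is the schema-evolution advantage over ad-hoc HTTP headers.
}
```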

ithkuil•1w ago
I like implementing this standard gRPC interface (if I already have a gRPC-based project) because it allows me to reuse a troubleshooting utility I wrote that uses it:

https://github.com/mkmik/byter

aktau•1w ago
Perhaps worth mentioning: https://github.com/stapelberg/rsync-over-grpc.

CamouflagedKiwi•1w ago
I've done this before, using Google's semi-standard ByteStream messages. It works, but it's a bit of work, and I really don't love that you're building on top of a protocol that already solves streaming content of arbitrary size, a capability gRPC drops and you have to reinvent in the application layer.

I know it's not easy to solve given how protobuf-centric it is, but this is the worst piece of gRPC for me. The 4MB limit is a terrible size, it's big enough to rarely hit in test cases but small enough it can hit you in production. If you control it all you can just lift that number to something arbitrarily big to avoid most things just failing (although you probably don't want to use that as an actual solution for streaming files of any size), but in practice a lot of "serious" setups end up contorting themselves remarkably to try to avoid that limit.

profsummergig•1w ago
Apparently the correct spelling is "transferring".

augusteo•1w ago
Building on sluongng's point about schema evolution: we ended up in a weird middle ground on a project where we used gRPC for metadata and presigned S3 URLs for the actual bytes.

The metadata schema changed constantly (new compression formats, different checksum algorithms, retry policies). Having protobufs for that was genuinely useful. But trying to pipe multi-gigabyte files through gRPC streams was painful. Memory pressure, connection timeouts on slow clients, debugging visibility was terrible.

S3 presigned URLs are the boring answer, but they work. Your object storage handles the hard parts (resumability, CDN integration, the actual bytes), and your gRPC service handles the interesting parts (authentication, metadata, business logic).

jeffbee•1w ago
Sending bulk data by reference is a common pattern. Even inside Google when I was there, bulk data was sometimes placed on ephemeral storage and sent by reference, and 100MB was considered a "dangerously large" protobuf that would log a warning during decode.

kruador•1w ago
I would add a further advantage of plain HTTP (REST) compared to gRPC. Splitting the response into blocks and having the client request the next block, as in the gRPC solution, causes round-trip delays. The server can't send the second block of data until the client requests it, so the server is essentially idle until the client has received all packets of the first block, parsed them and generated the next request.

In contrast, while HTTP/2 does impose framing of streams, that framing is done entirely server-side. If all one end has to send to the other is a single stream, it'll be DATA frame after DATA frame for the same stream. The client is not required to acknowledge anything. (At least, nothing above the TCP layer!)

It probably wasn't noticeable in this experiment as, if I'm reading it correctly, the server and client were on the same box, but if you were separated by any significant distance, plain HTTP should be noticeably faster.

matttproud•1w ago
Method signatures in gRPC present a pandora's box of questions: https://matttproud.com/blog/posts/grpc-method-discipline.htm....

The questions aren't unique to gRPC, however; gRPC forces you to confront them early and explicitly IMO, which is not a bad thing.

tuetuopay•1w ago
And then a C programmer comes in and slams sendfile. That’s the main advantage of HTTP/1.1. Of course TLS throws a wrench in it, but once kTLS is actually good (ahem), it’ll work.

In all seriousness, don’t do large file transfers over gRPC, except in a pinch for small files. As soon as e.g API gateways are introduced in the mix, stuff can go south very quickly: increased allocation, GC pressure, network usage, etc. Just use presigned S3 URLs.

aurumque•1w ago
S3 also gives you multipart parallel uploads. Each part gets stored and then when you're done the concatenation is performed close to the storage layer.

tuetuopay•1w ago
Indeed. So many wheels in HTTP/1.1 that need to be reinvented with gRPC.

jasonjei•1w ago
I have PTSD from Google Protobufs. Sometimes the cost of a less-efficient protocol or traditional REST is worth it over an overengineered solution. Protobufs can be fine, but they're largely overkill. Debugging with protobuf was the price we paid for an "efficient" protocol.