If you want to see what it should be, check any forgejo/codeberg repo.
Using Emacs liberated me from wasting energy on crap like that. Why would I ever complain about GitHub changing, lacking, or breaking parts of its UI? If all I want is to browse files, and a well-trodden path for doing exactly that has existed in my tool belt for years, why wouldn't I just use it?
For changed files we could implement CoW, so we could keep modified files without uploading them to the remote.
For speedup we could cache files locally.
.... oh yeah, I guess we get all of this (dired, ripgrep, LSP, speed) for free just by running `git clone <url>`?
The point about ripgrep is worth considering, though: a search command that hits the search API would be nice. There are some constraints, however: it's rate-limited to something like 30 requests/minute (I think); it only works with indexed branches (non-default branches may not be indexed); it only indexes files under a certain size (something like 400 KB); and there's no regex support. All of that makes me think it may be better to make it easy to clone a repo and jump into the clone instead.
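To make the constraints concrete, here's a minimal sketch of such a search command against GitHub's code-search endpoint (`GET /search/code` with a `q=term repo:owner/name` query, which is the real API shape); the repo and token here are placeholders, and note the API matches literal terms only, not regexes:

```python
# Sketch of a "remote grep" via GitHub's code-search API.
# Assumptions: you have a personal access token; the repo's default
# branch is indexed; files over the size limit won't appear in results.
import json
import urllib.parse
import urllib.request

API = "https://api.github.com/search/code"

def build_search_url(term: str, repo: str) -> str:
    """Build a code-search URL; the API matches literal terms, no regex."""
    q = urllib.parse.quote_plus(f"{term} repo:{repo}")
    return f"{API}?q={q}"

def search_code(term: str, repo: str, token: str) -> list[str]:
    """Return matching file paths; subject to the search rate limit."""
    req = urllib.request.Request(
        build_search_url(term, repo),
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [item["path"] for item in payload["items"]]
```

Wrapping this in an Emacs command would be straightforward, but every call burns rate-limit budget, which is exactly why a local clone plus ripgrep keeps looking more attractive.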
Pay08•2d ago
iLemming•2d ago
necovek•2d ago
iLemming•2d ago
- Tree listing. There's no raw HTTP URL that gives you a directory listing; raw.githubblabla.com can't serve directory indexes. You'd have to shell out to `git ls-tree` etc. over SSH, which means essentially implementing a partial git client.
- Getting subtrees is also problematic.
- Branch listing and repo search: there's no git-protocol equivalent for those; you need the API.
- The current approach fetches the entire tree in one API call. Doing the same over the pack protocol means negotiating a fetch, receiving packfile data, and parsing it. Much heavier, much more code.
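For reference, the "entire tree in one API call" approach boils down to GitHub's `GET /repos/{owner}/{repo}/git/trees/{ref}?recursive=1` endpoint (a real API), which returns every path in a single JSON payload. A minimal sketch, with placeholder owner/repo values:

```python
# Sketch: fetch and decode a full repo tree in one request.
# The endpoint is real; error handling and truncation checks
# (the API caps very large trees) are omitted for brevity.
import json
import urllib.request

def tree_url(owner: str, repo: str, ref: str) -> str:
    return (f"https://api.github.com/repos/{owner}/{repo}"
            f"/git/trees/{ref}?recursive=1")

def parse_tree(payload: bytes) -> list[tuple[str, str]]:
    """Return (path, type) pairs; type is 'blob' for files, 'tree' for dirs."""
    data = json.loads(payload)
    return [(e["path"], e["type"]) for e in data.get("tree", [])]

def fetch_tree(owner: str, repo: str, ref: str = "HEAD"):
    with urllib.request.urlopen(tree_url(owner, repo, ref)) as resp:
        return parse_tree(resp.read())
```

One request, one JSON parse, and you have everything needed to render a dired-style listing; the pack-protocol route would replace those few lines with a fetch negotiation and a packfile parser.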
We can only imagine a world where git's transport layer gives you a browsable filesystem interface. It doesn't; git's protocols are optimized for syncing object graphs, not for random-access file browsing.
v9v•2d ago
iLemming•1d ago