Or maybe your IP/browser is questionable.
Personally, I like sourcehut (sr.ht)
(Joke's on Reddit, though, because Reddit content has become pretty worthless since they did this, and everything from before they did this was already publicly archived)
Since Microsoft is struggling to make ends meet, maybe they could throw up a captcha or a proof-of-work check like Anubis by Xe Iaso.
They already disabled code search for unauthenticated users. It's totally plausible they will disable code browsing as well.
And seriously, if they keep this up, limiting their web interface while leaving unauthenticated cloning enabled, I'd rather clone the repo than log in.
GitHub code browsing has gone downhill since Microsoft bought them anyway. A simple proxy that clones a repo and serves it would solve both the rate limits and their awful UX.
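(A rough sketch of the DIY version of that, assuming gitweb and a local httpd like webrick are installed; OWNER/REPO is just a placeholder:)

    git clone https://github.com/OWNER/REPO.git   # mirror the repo once
    cd REPO
    git instaweb --httpd=webrick --port=1234      # serves a local gitweb UI at http://localhost:1234, no rate limits involved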
I'm using Firefox and Brave on Linux from a residential internet provider in Europe, and the 429 error triggers consistently on both browsers. Not sure I would consider my setup questionable considering their target audience.
The future is a .txt file of John Carmack pointing out how efficient software used to be, locked behind a repeating WAF captcha, forever.
Could you help me understand what the difference is between your point and the arguments the MPAA and RIAA used to ruin the lives of the torrent users they concluded were "thieves"?
As a rule of thumb, do you think the people who contribute content to services because those services are open access, and who wish them to remain so, should be the ones forced to constantly migrate to new services to keep their content free?
When AI can perfectly replicate the browsing behavior of a human being, should Github restrict viewing a git repository to those who have verified blood biometrics or had their eyes scanned by an Orb? If they make that change, will you still place blame on "jackasses"?
Well the main difference is that this is being used to justify blocking and not demanding thousands of dollars.
> When AI can perfectly replicate the browsing behavior of a human being
They're still being jackasses because I'm willing to pay to give free service to X humans but not 20X bots pretending to be humans.
I’m not arguing you shouldn’t be annoyed by these changes, I’m arguing you should be mad at the right people. The scrapers violated the implicit contract of the open internet, and now that’s being made more explicit. GitHub’s not actually a charity, but they’ve been able to provide a free service in exchange for the good will and community that comes along with it driving enough business to cover their costs of providing that service. The scrapers have changed that math, as they did with every other site on the internet in a similar fashion. You can’t loot a store and expect them not to upgrade the locks - as the saying goes, the enemy gets a vote on your strategy, too.
I'd like to focus on your strongest point, which is the cost to the companies. I would love to know what that increase in cost looks like. You can install nginx on a tiny server and serve 10k rps of static content, or like 50 (not 50k) rps of a random web framework that generates the same content. So this increase in cost must be weighed against how efficient the software serving that content is.
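(Numbers in that ballpark are easy to sanity-check on your own hardware; a rough sketch with wrk, assuming nginx is already serving a static page on port 80 and a hypothetical app server runs on port 8000:)

    wrk -t4 -c100 -d30s http://127.0.0.1/        # static file via nginx: typically tens of thousands of req/s
    wrk -t4 -c100 -d30s http://127.0.0.1:8000/   # same content rendered by the app server, for comparison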
If this Github post included a bunch of numbers and details demonstrating that they have reached the end of the line on optimizing their web frontend, that they have run out of things to cache, and that the increase in costs is a real cause for concern for the company (not just a quick shave to the bottom line, not a bigger net/compute check written from Github to their owners), I'd throw my hands up with them and start rallying against the (unquestionably inefficient and borderline hostile) AI agent scrapers causing the increase in traffic.
Because they did not provide that information, I have to assume that Github and Microsoft are doing this out of pure profit motivations and have abandoned any sense of commitment to open access of software. In fact, they have much to gain from building the walls of their garden up as high as they can get away with, and I'm skeptical their increase in costs is very material at all.
I would rather support services that don't camouflage as open and free software proponents one day and victims of a robbery on the next. I still think this question is important and valid: There is tons of software on Github written by users who wish for their work to remain open access. Is that the class of software and people you believe should be shuffled around into smaller and smaller services that haven't yet abandoned the commitments that allowed them to become popular?
I don't think many people were particularly sympathetic to people making money off piracy - by and large, people were upset because people committing piracy for personal use were getting hit with the kinds of fines and legal charges usually reserved for, well, people who make money off piracy.
> Am I wrong in assuming most of this scraping comes from people utilizing AI agents for things like AI-assisted coding?
Yes. The huge increases in traffic aren't from, say, Claude querying Github when you ask it to; they're from the scraping that drives the initial training process. Claude and the others only know anything about code because Github and StackOverflow were part of their training corpus, because the companies which made them scraped the whole damn site and used it as part of their training data for making a ~competing product. That's what Github's reacting to, that's what Reddit reacted to, that's what everyone's been reacting to: it's the scraping of the data for training that's leading to these reactions.
To be clear, because I think this is maybe a core of our disagreement: The problem that's leading to this isn't LLM agents acting on behalf of a user - it's not that Cursor googled python code for you - it's that the various companies training the models are aggressively scraping everything they can get their hands on. It's not one request for one repo on behalf of one user, it's the wholesale scraping of everything on the site by a rival company to make a rival product, most likely in violation of terms of service and certainly in violation of anything that anyone could reasonably assume another corporate entity would stand for. Github's not mad at you, they're mad at OpenAI.
> There is tons of software on Github written by users who wish for their work to remain open access. Is that the class of software and people you believe should be shuffled around into smaller and smaller services that haven't yet abandoned the commitments that allowed them to become popular?
You store your money in a bank. The bank gets robbed repeatedly by an organized group of serial bank robbers, and increases security at the branch. You move your money to another bank, because the increased security annoys you. You understand the problem here may repeat itself elsewhere as well, right?
I do, and how is this to cap off our discussion:
>You move your money to another bank, because the increased security annoys you.
On my way out, I would quote Benjamin Franklin: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
I go out of my way to help my community, and I expect those I support to do the same. That bank could put more resources into investigating/catching the robbers who are attacking not just the bank but the security and liberty of their community, or it could treat every customer with more suspicion than they did yesterday. I know where the latter option leads, and I won't stand for it.
You're right that it repeats itself elsewhere. Often, I find.
I encountered this too once, but thought it was a glitch. Worrying if they can't sort it.
And not just generally degenerate bots? Or just one evil bot network?
Collateral damage of AI I guess
From a long-term, clean network I have been consistently seeing these “whoa there!” secondary rate limit errors for over a month when browsing more than 2-3 files in a repo.
My experience has been that once they’ve throttled your IP under this policy, you cannot even reach a login page to authenticate. The docs direct you to file a ticket (if you’re a paying customer, which I am) if you consistently get that error.
I was never able to file a ticket when this happened because their rate limiter also applies to one of the required backend services that the ticketing system calls from the browser. Clearly they don’t test that experience end to end.
That’s what I run on my personal server now.
Almost went with Gitea, but the ownership structure is murky, feature development seems to have plateaued, and they haven’t even figured out how to host their own code. It’s still all on GitHub.
I’ve been impressed by Forgejo. It’s so much faster than Github at performing operations, I can actually back up my entire corpus of data in a format that’s restorable/usable, and there aren’t useless (AI) upsells cluttering my UX.
For listeners at home wondering why you'd want that at all:
I want a centralized Git repo where I can sync config files from my various machines. I have a VPS so I just create a .git directory and start using SSH to push/pull against it. Everything works!
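(Concretely, the whole setup is just something like this; the hostname and paths are made up:)

    ssh me@my-vps 'git init --bare ~/dotfiles.git'    # a bare repo on the VPS
    git remote add vps me@my-vps:~/dotfiles.git       # point the local clone at it
    git push vps main                                 # ...and push/pull over plain SSH from then on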
But then, my buddy wants to see some of my config files. Hmm. I can create an SSH user for him and then set the permissions on that .git to give him read-only access. Fine. That works.
Until he improves some of them. Hey, can I give him a read-write repo he can push a branch to? Um, sure, give me a bit to think this through...
And one of his coworkers thinks this is fascinating and wants to look, too. Do I create an SSH account for this person I don't know well at all?
At this point, I've done more work than just installing something like Forgejo and letting my friend and his FOAF create accounts on it. There's a nice UI for configuring their permissions. They don't have SSH access directly into my server. It's all the convenience of something like GitHub, except entirely under my control and I don't have to pay for private repos.
Forgejo promised interesting features like federation but has yet to deliver any of them; meanwhile, the real features they've been shipping are cosmetic changes like being able to set pronouns in your profile (and then another 10 commits to improve that...)
If you judge by very superficial metrics like commit counts, Forgejo's count is heavily inflated by merges (which the Gitea development process doesn't use, preferring rebase) and frequent dependency upgrades. When you remove those, the remaining commits represent maybe half of Gitea's development activity.
So I expect to observe both for another year before deciding where to upgrade. They're too similar at the moment.
FWIW, one of Gitea's larger users, Blender, continues to use and sponsor Gitea and has no plans to switch AFAIK.
I cloned both repos and ran
git log --since="1 year ago" --format="%an" | sort | uniq -c | sort -n | wc -l
to get an overview of things. That showed 153 people (including a small handful of bots) contributing to Gitea, and 232 people (and a couple bots) contributing to Forgejo. There are some dupes in each list, showing separate accounts for "John Doe" and "johndoe", that kind of thing, but the numbers look small and similar to me so I think they can be safely ignored.

And it looks to me like Forgejo is using a similar process of combining lots of smaller PR commits into a single merge commit. The wide majority of its commits since last June or so seem to be 1-commit-per-PR. Changing the above command to `--since="2024-07-1"` reduces the number of unique contributors to 136 for Gitea, 217 for Forgejo. It also shows 1228 commits for Gitea and 3039 for Forgejo, and I do think that's a legitimately apples-to-apples comparison.
If we brute force it and run
git log --since="1 year ago" | rg '\(\#\d{4,5}\)' | wc -l
to match lines that mention a PR (like "Simplify review UI (#31062)" or "Remove `title` from email heads (#3810)"), then I'm seeing 1256 PR-like Gitea commits and 2181 Forgejo commits.

And finally, their respective activity pages (https://github.com/go-gitea/gitea/pulse/monthly and https://codeberg.org/forgejo/forgejo/activity/monthly) show a similar story.
I'm not an expert in methodology here, but from my initial poking around, it would seem to me that Forgejo has a lot more activity and variety of contributors than Gitea does.
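(For what it's worth, a quick way to knock out the "John Doe"/"johndoe" dupes is to count author emails instead of display names, though it's still only a rough proxy:)

    git log --since="1 year ago" --format="%ae" | sort -u | wc -l   # unique author emails over the past year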
Or randomly when clicking through a repository file tree. The first time I hit a rate limit was when I was skimming through a repository on my phone, and about the 5th file I clicked I was denied and locked out. Not for a few seconds either, it lasted long enough that I gave up on waiting then refreshing every ~10 seconds.
Yes, it does not look like an intended service usage, but I used it for a demo: https://github.com/ClickHouse/web-tables-demo/
Anyway, will try to do the same with GitHub pages :)
Those of us who self-host git repos know that this is not true. Over at ardour.org, we've passed 1M unique IPs banned due to AI trawlers sucking down our repository one commit at a time. It was killing our server before we put fail2ban to work.
I'm not arguing that the specific steps Github have taken are the right ones. They might be, they might not, but they do help to address the problem. Our choice for now has been based on noticing that the trawlers are always fetching commits, so we tweaked things such that the overall http-facing git repo works, but you cannot access commit-based URLs. If you want that, you need to use our github mirror :)
They've pretty widely chosen to not do work and just slam websites from proxy IPs instead.
You would think they'd be using their own products to do that work, if those products worked as well as advertised...
I would more readily assume a large social networking company filled with bright minds would have worked out some kind of agreement on, say, a large corpus of copyrighted training data before using it.
It's the wild wild west right now. Data is king for AI training.
However, these smaller companies are doing ridiculous things like scraping the same site many thousands of times a day, far more often than the content of the sites change.
This is egregious behavior because Microsoft hasn't been upfront about it. Many open source projects are probably unaware that their issue tracker has been walled off, creating headaches unbeknownst to them.
Is 20 years too long ago to learn from then?
Embrace. Extend. Extinguish. This has never gone away.
It still is "open to all", but you can't abuse the service and expect to retain the ability to abuse the service.
Also where is "silently" coming from? This whole HN page is because someone linked to an article announcing the change...
I'm not really a fan of Microsoft anymore, but some of you have (apparently long ago) turned the corner into "anything Microsoft does that I don't want Microsoft to do is clearly Microsoft being evil", and that is simply not a reality-based viewpoint. Sometimes Microsoft does do something one could consider "evil", but here you're assuming evil is happening without any knowledge that it is, and that's not a valid way to think about things if you want to be heard by anyone.
> years
No.
> a few
I’ve always considered “a few” to be “between 3 and 12” and 60 is more than “a few”.
If you need more proof, this is last year:
https://news.ycombinator.com/item?id=39322838
And this is another year before that:
https://news.ycombinator.com/item?id=36254129
Oh look, there's even visual proof in the discussion:
https://imgur.com/a/github-search-gated-behind-login-BT6uRIe
In another browser I log in because I do work with code in GitHub frequently. I comment on issues and PRs and all the normal stuff.
I regularly drive two browsers, yes. I alternate between them multiple times per minute, often. In one, I am not logged in. In the other, I am logged in.
Not once have I hit any anonymous rate limit.
I respect one’s desire to use something without logging in, that’s fine. But what you do when you use up the free tier of a service is one of the following: A) you pay for the next tier, B) you (in this case) log in so that your usage is no longer considered “anonymous”, or C) you wait for the next usage measurement period to begin so that you can resume.
It’s their service and they can decide how they want to provide it, in the exact same way that you can decide how to provide any services that you might provide.
If it is your privacy that you are considering by not having an account, fine. By making that choice you are limiting yourself to whatever the services you use decide to give you, and you are entitled to nothing.
“I could do more in the past!” So what? They decided to let you do more in the past, and now they’ve decided to let you do less. They don’t owe you free services; you choose to use the free service and by doing so you’ve chosen to be bound by any usage caps that they decide to apply to you.
Nobody owes you free services AT ALL, but you’re getting them anyway. Instead of feeling entitled to more than you’re getting, maybe be thankful for what you have.
I have a really hard time believing you on this. There's visual evidence from a year ago and it's consistent with my experience. And no, I haven't been hammering their servers.
https://imgur.com/a/github-search-gated-behind-login-BT6uRIe
> “I could do more in the past!” So what?
So, I'll repeat what I said in the first comment that you replied to. GitHub captured the open source ecosystem under the premise that its code and issue tracker will remain open to all. Silently changing the deal afterwards is reprehensible.
> Instead of feeling entitled
Again, I'll just repeat yet another one of my comments. Microsoft didn't just give, they're benefitting massively from open source. And they're looking to extract even more value through data mining from forced logins and stealing GPL licensed code by laundering it using AI. Open source projects that chose GitHub didn't agree to this!
> be thankful for what you have
You can't be serious. Yeah, be grateful for the trillion dollar company buying a service it didn't create, extracting as much value as they can from it in questionable ways and tearing up social contract!
Kind of a rhetorical question I guess, for a while I maintained a small open source project and yes, I still get entitled “why did you even publish this if you’re not going to fix the bug I reported” comments. Like, sorry, but my life priorities changed over the intervening 15 years. Fork it and fix it.
But there is obligation. I’m asking if contributing to open source creates an obligation to do so forever, either for individuals or companies.
There are versions of iptables available that apparently can scale to 1M+ addresses, but our approach is just to unban all at that point, and then let things accumulate again.
Since we began responding with 404 to all commit URLs, the rate at which banned addresses accumulate has slowed down quite a bit.
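(For anyone else hitting that wall: ipset is the usual answer, since a hash set scales to millions of entries where a plain rule-per-address iptables chain does not. A minimal sketch, with the set name made up:)

    ipset create ai_banned hash:ip maxelem 2000000       # room for a couple million offenders
    iptables -I INPUT -m set --match-set ai_banned src -j DROP
    ipset add ai_banned 203.0.113.7                      # fail2ban can be pointed at the set via an ipset-based banaction
    ipset flush ai_banned                                # the periodic "unban all" mentioned above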
I mean...
* Github is owned by Microsoft.
* The reason for this is AI crawlers.
* The reason AI crawlers exist in masses is an absurd hype around LLM+AI technology.
* The reason for that is... ChatGPT?
* The main investor of ChatGPT happens to be...?
> You have exceeded a secondary rate limit.
Edit and self-answer:
> In addition to primary rate limits, GitHub enforces secondary rate limits
(…)
> These secondary rate limits are subject to change without notice. You may also encounter a secondary rate limit for undisclosed reasons.
https://docs.github.com/en/rest/using-the-rest-api/rate-limi...
This rather significantly changes the place of github hosted code in the ecosystem.
I understand it is probably a response to the ill-behaved decentralized bot-nets doing mass scraping with cloaked user-agents (that everyone assumes is AI-related, but I think it's all just speculation and it's quite mysterious) -- which is affecting most of us.
The mystery bot net(s) are kind of destroying the open web, by the counter-measures being chosen.
The SEC has no criminal prosecution powers; all they can do in that regard is write a note asking the DOJ to pretty-please look into something. The only way to get a federal (civilian) criminal conviction is to have the DOJ go after you.
Insider trading charges are to high-flying executive-types as "Based on my training and experience, I detected the distinct odor of cannabis on the suspect's person" is to folks who are committing the crime of walking while black near an officer with something to prove.
Seriously, these regs are very, very vague and open-ended, and a ton of deference is given to the SEC.
The US cannot even stop NSO from hacking the system with spyware, and Israel is a political ally.
Also, neither the new nor the old rate limits are mentioned.
At this point knowledge seems to be gathered and replicated to great effect and sites that either want to monetize their content OR prevent bot traffic wasting resources seem to have one easy option.
AI not caching things is a real issue. Sites being difficult TO cache / failing the 'wget mirror test' is the other side of the issue.
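(The 'mirror test' being, roughly: point wget at the site and see whether you get a usable offline copy back. example.com stands in for the site under test:)

    wget --mirror --convert-links --page-requisites --no-parent https://example.com/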
since when do actors that want to gather your entire dataset respect things like this??? how can you enforce such things with just "please don't crawl this directory, thanks"?
A van, some balaclavas and 4 people with big sticks.
All of public github is only 21TB. Can't they just host that on a dumb cache and let the bots crawl to their heart's content?
Quoting from https://archiveprogram.github.com/arctic-vault/
> every *active* public GitHub repository. [active meaning any] repo with any commits between [2019-11-13 and 2020-02-02 ...] The snapshot consists of the HEAD of the default branch of each repository, minus any binaries larger than 100KB in size
So no files larger than 100KB, no commit history, no issues or PR data, no other git metadata.
If we look at this blog post from 2022, the number we get is 18.6 PB for just git data https://github.blog/engineering/architecture-optimization/sc...
Admittedly, that includes private repositories too, and there's no public number for just public repositories, but I'm certain it's at least a noticeable fraction of that ~19PB.
About $250,000 for 1000 HDDs and you get all the data: ~19 PB across 1000 drives is roughly 19-20 TB per drive, at something like $250 per disk. Meaning a private person, say a top FAANG engineer, could afford a copy of the whole dataset after 2-3 years on the job. For companies dealing in AI, that raw price is nothing at all.
I put this on a web site once, and didn't notice for a month that someone was making queries at a frantic rate. It had zero impact on other traffic.
I'm not even sure what that would look like for a huge service like GitHub. Where do you hold those many thousands of concurrent http connections and their pending request queues in a way that you can make decisions on them while making more operational sense than a simple rate limit?
A lot of things would be easy if it were viable to have one big all-knowing giga load balancer.
I remember Rap Genius wrote a blog post whining that Heroku did random routing to their dynos instead of routing to the dyno with the shortest request queue. As opposed to just making an all-knowing infiniscaling giga load balancer that knows everything about the system.
Sure, some solutions tend to be more efficient than others, but those typically boil down to implementation details rather than fundamental limitations in system design.
Holding open an idle HTTP connection is cheap today. That's the use case for "async". Servicing a Github fetch is much more expensive.
It's a good mitigation when you have legit requests, and some requestors create far more load than others. If Github used fair queuing for authenticated requests, heavy users would see slower response, but single requests would be serviced quickly. That tends to discourage overdoing it.
Still, if "git clone" stops working, we're going to need a Github alternative.
But this feels like a further attempt to create a walled garden around 'our' source code. I say our, but the first push to KYC, asking for phone numbers, was enough for me to delete all and close my account. Being on the outside, it feels like those walls get taller every month. I often see an interesting project mentioned on HN and clone the repo, but more and more times that is failing. Trying to browse online is now limited, and they recently disabled search without an account.
For such a critical piece of worldwide technology infrastructure, maybe it would be better run by a not-for-profit independent foundation. I guess, since it is just git, anyone could start this, and migration would be easy.
However, a lot of people think Github is the only option, and it benefits from network effects.
Non-profit alternatives suffer from a lack of marketing and deal making. True of most things these days.
Sad but true. I’m trying to promote these whenever I can.
Still great for some applications and developers, but not all.
I forget because I don't use them, but didn't github introduce some products meant as dependency package registries at some point, for some platforms? Does this apply to them? (I would hope not, unless they want to kill them?)
This rather enormously changes github's potential place in ecosystems.
What with the poor announcement/rollout -- also unusual for what we'd expect of GitHub, had they realized how much this affects -- I wonder if this was an "emergency" thing, not fully thought out, in response to the crazy decentralized bot deluge we've all been dealing with. I wonder if they will reconsider and come up with another solution; this one, and the way it was rolled out, do not really match the ingenuity and competence we usually count on from GitHub.
I think it will hurt github's reputation more than they realize if they don't provide a lot more context, with suggested workarounds for various use cases, and/or a rollback. This is actually quite an impactful change, in a way that the subtle rollout seems to suggest they didn't realize?
Why can't people harden their software with guards? Proper DDoS protection? Better caching? Rewrite the hot paths in C, Rust, Zig, Go, Haskell etc.?
It strikes me as very odd, the atmosphere of these threads. So much doom and gloom. If my site was hit by an LLM scraper I'd be like "oh, it's on!", a big smile, and I'll get to work right away. And I'll have that work approved because I'll use the occasion to convince the executives of the need. And I'll have tons of fun.
Can somebody offer a take on why we, the forefront of the tech sector, are just surrendering almost without firing a single shot?
Especially when the car's plastic suspension is costing them extra money? I don't get it here, for real. I would think that selfish capitalistic interests would have them come around at one point! (Clarification: invest $5M for a year before the whole thing starts costing you extra $30M a year, for example.)
And don't even get me started on the fact that GitHub is written in one of the most hardware-inefficient web frameworks (Rails). I mean, OK, Rails is absolutely great for many things, because most people's business will never scale that much, and so the initial increase in developer velocity is an unquestionable one-sided win. I get that, and I stopped hating Rails a long time ago (even though I dislike it, I do recognize where it's a solid and even preferred choice). But I've made a lot of money trying to modernize and maintain Rails monoliths; it's just not suited beyond a certain scale -- not without paying for extremely expensive consultants, that is. Everything can be made to work, but it starts costing exponentially more from a certain scale upward.
And yet nobody at GitHub figures "Maybe it's time we rewrite some of the hot paths?" or just "Make more aggressive caching even if it means some users see data outdated by 30 seconds or so"? Nothing at all?
Sorry, I am kind of ranting and not really saying anything to you per se. I am just very puzzled about how paralyzed GitHub seems under Microsoft.
However, execs I know lease cars, not buy them, for that exact reason. You don't care if the suspension is made of plastic, if it's a subscription model. The metaphor very much falls apart but I had a point somewhere.
Well, there's a solution for that as well: execs should be liable for a number of years even after they move on. Para-troopers that swoop in, reap rewards they never worked for, and parachute away with the gold is something that must be legislated against, hard and aggressive. People should go to jail.
But... these are the people who make the rules so not happening, right?
Oh well, better luck to us in the next life I guess.
gnabgib•2mo ago
5000 req/hour for authenticated - personal
15000 req/hour for authenticated - enterprise org
According to https://docs.github.com/en/rest/using-the-rest-api/rate-limi...
I bump into this just browsing a repo's code (unauth).. seems like it's one of the side effects of the AI rush.
mijoharas•1mo ago
I thought I was just misreading it and failing to see where they stated what the new rate limits were, since that's what anyone would care about when reading it.
1oooqooq•1mo ago
they already have all your code. they've won.
naikrovek•1mo ago
If people training LLMs are excessively scraping GitHub, it is well within GitHub's purview to limit that activity. It's their site and it's up to them to make sure that it stays available. If that means that they curtail the activity of abusive users, then of course they're going to do that.
1oooqooq•1mo ago
why do you think they blocked non-logged-in users from even searching before this? they need your data and they are getting it exactly on their terms. because, as I've said, they have already won.
sebmellen•1mo ago
naikrovek•1mo ago
I see no MS or GitHub specific extension, here. Copilot exists, and so do many other tools. Copilot can use lots of non-Microsoft models, too, including models from non-Microsoft companies. You can also get git repository hosting from other companies. You can even do it yourself.
So, explain yourself. What has been embraced, extended, and extinguished? Be specific. No “vibes”. Cite your sources or admit you have none. I see no extending unique to MS and I see no extinguishing. So explain yourself.
pdimitar•1mo ago
naikrovek•1mo ago
1oooqooq•1mo ago
Microsoft has a more successful social network for programmers than HN or google circles (heh) ever dreamed of.
the argument had already dropped scrapers' access to the information, since they own the scrapers and all... why did you bring that back up as the main argument? they hijacked what could have been a community hub and turned it into a walled garden to sell a few enterprise licenses.
naikrovek•1mo ago
I don't know. The limits in the comment that you're replying to are unchanged from where they were a year ago.
So far I don't see anything that has changed, and without an explanation from GitHub I don't think we'll know for sure what has changed.
blinker21•1mo ago
I really wish github would stop logging me out.
Novosell•1mo ago
1oooqooq•1mo ago
GH now uses the publisher business model, and as such, they lose money when you're logged out. same reason why google, fb, etc will not ask you for a password for decades.
zarzavat•1mo ago
If they would let me stay logged in for a year then I wouldn't care so much.
tux3•1mo ago
zarzavat•1mo ago
Though GitHub did force me to use 2fa earlier because they said I have a "popular repo", so perhaps my account is considered high risk. Or maybe it's triggered by travelling and changing IP locations? I have no clue, but it's annoying to have to 2fa more than once in a blue moon.
dghlsakjg•1mo ago
usernamed7•1mo ago
out-of-ideas•1mo ago
thanks github for the worse experience
rendaw•1mo ago
ikiris•1mo ago
notatoad•1mo ago
zarzavat•1mo ago
A normal rate limit to separate humans and bots would be something like 60 per minute. So it's about an order of magnitude too low.
mjevans•1mo ago
Something on the order of 6 seconds a page doesn't sound TOO out of human viewing range depending on how quickly things load and how fast rejects are identified.
I could see ~10 pages / min which is 600 pages / hour. I could also see the argument that a human would get tired at that rate and something closer to 200-300 / hr is reasonable.
hansvm•1mo ago
jakebasile•1mo ago