frontpage.

Show HN: I wrote a BitTorrent Client from scratch

https://github.com/piyushgupta53/go-torrent-client
50•piyushgupta53•1h ago•7 comments

Jemalloc Postmortem

https://jasone.github.io/2025/06/12/jemalloc-postmortem/
297•jasone•4h ago•72 comments

Frequent reauth doesn't make you more secure

https://tailscale.com/blog/frequent-reath-security
757•ingve•11h ago•330 comments

Rendering Crispy Text on the GPU

https://osor.io/text
103•ibobev•3h ago•26 comments

Slow and steady, this poem will win your heart

https://www.nytimes.com/interactive/2025/06/12/books/kay-ryan-turtle-poem.html
8•mrholme•58m ago•6 comments

Zero-Shot Forecasting: Our Search for a Time-Series Foundation Model

https://www.parseable.com/blog/zero-shot-forecasting
14•tiwarinitish86•1h ago•1 comment

A receipt printer cured my procrastination

https://www.laurieherault.com/articles/a-thermal-receipt-printer-cured-my-procrastination
843•laurieherault•18h ago•456 comments

A Dark Adtech Empire Fed by Fake CAPTCHAs

https://krebsonsecurity.com/2025/06/inside-a-dark-adtech-empire-fed-by-fake-captchas/
116•todsacerdoti•7h ago•28 comments

iPhone 11 emulation done in QEMU

https://github.com/ChefKissInc/QEMUAppleSilicon
260•71bw•15h ago•21 comments

Three Algorithms for YSH Syntax Highlighting

https://github.com/oils-for-unix/oils.vim/blob/main/doc/algorithms.md
14•todsacerdoti•3h ago•3 comments

Show HN: Tritium – The Legal IDE in Rust

https://tritium.legal/preview
179•piker•18h ago•85 comments

Major sugar substitute found to impair brain blood vessel cell function

https://medicalxpress.com/news/2025-06-major-sugar-substitute-impair-brain.html
52•wglb•6h ago•17 comments

Urban Design and Adaptive Reuse in North Korea, Japan, and Singapore

https://www.governance.fyi/p/adaptive-reuse-across-asia-singapores
17•daveland•4h ago•4 comments

Show HN: McWig – A modal, Vim-like text editor written in Go

https://github.com/firstrow/mcwig
97•andrew_bbb•16h ago•8 comments

Worldwide power grid with glass insulated HVDC cables

https://omattos.com/2025/06/12/glass-hvdc-cables.html
63•londons_explore•10h ago•40 comments

Maximizing Battery Storage Profits via High-Frequency Intraday Trading

https://arxiv.org/abs/2504.06932
228•doener•20h ago•215 comments

Show HN: Tool-Assisted Speedrunning the Boring Parts of Animal Crossing (GCN)

https://github.com/hunterirving/pico-crossing
82•hunterirving•16h ago•11 comments

The curse of Toumaï: an ancient skull and a bitter feud over humanity's origins

https://www.theguardian.com/science/2025/may/27/the-curse-of-toumai-ancient-skull-disputed-femur-feud-humanity-origins
45•benbreen•8h ago•17 comments

Rust compiler performance

https://kobzol.github.io/rust/rustc/2025/06/09/why-doesnt-rust-care-more-about-compiler-performance.html
188•mellosouls•2d ago•137 comments

Why does my ripped CD have messed up track names? And why is one track missing?

https://www.akpain.net/blog/inside-a-cd/
110•surprisetalk•15h ago•111 comments

Solving LinkedIn Queens with SMT

https://buttondown.com/hillelwayne/archive/solving-linkedin-queens-with-smt/
100•azhenley•13h ago•33 comments

Chatterbox TTS

https://github.com/resemble-ai/chatterbox
595•pinter69•1d ago•177 comments

Roundtable (YC S23) Is Hiring a President / CRO

https://www.ycombinator.com/companies/roundtable/jobs/wmPTI9F-president-cro-founding
1•timshell•9h ago

Microsoft Office migration from Source Depot to Git

https://danielsada.tech/blog/carreer-part-7-how-office-moved-to-git-and-i-loved-devex/
317•dshacker•1d ago•251 comments

First thoughts on o3 pro

https://www.latent.space/p/o3-pro
136•aratahikaru5•2d ago•112 comments

Dancing brainwaves: How sound reshapes your brain networks in real time

https://www.sciencedaily.com/releases/2025/06/250602155001.htm
147•lentoutcry•4d ago•40 comments

Helion: A modern fast paced Doom FPS engine in C#

https://github.com/Helion-Engine/Helion
143•klaussilveira•2d ago•54 comments

Quantum Computation Lecture Notes (2022)

https://math.mit.edu/~shor/435-LN/
123•ibobev•3d ago•43 comments

US-backed Israeli company's spyware used to target European journalists

https://apnews.com/article/spyware-italy-paragon-meloni-pegasus-f36dd32106f44398ee24001317ccf2bb
542•01-_-•13h ago•257 comments

The Case for Software Craftsmanship in the Era of Vibes

https://zed.dev/blog/software-craftsmanship-in-the-era-of-vibes
94•Bogdanp•6h ago•33 comments

Congratulations on creating the one billionth repository on GitHub

https://github.com/AasishPokhrel/shit/issues/1
581•petercooper•1d ago

Comments

bitpush•1d ago
curl -s https://api.github.com/repositories/1000000000

{
  "id": 1000000000,
  "node_id": "R_kgDOO5rKAA",
  "name": "shit",
  "full_name": "AasishPokhrel/shit"
}
samgranieri•1d ago
Well shit!
mistersquid•1d ago
Is this what folks mean by enshittification?
arcanemachiner•1d ago
For anyone wondering, the name of the repo is literally "shit".
Cyphase•1d ago
I'm wondering if AasishPokhrel created this repo for the purpose of being the billionth.
joshdavham•1d ago
I highly doubt it, but that does sound possible.
paxys•1d ago
It's pretty easy to game this. Just keep creating repos until you hit #1,000,000,000, deleting the old ones as you go. Their API makes it trivial. The only issues are rate limits and other people simultaneously creating repos, so it's partly a matter of luck.
recursive•1d ago
I don't believe they will renumber the old ones. Also, it can't be trivial, since two people can try this, and only one can win.
handfuloflight•1d ago
There is always one trillion to look forward to!
hu3•1d ago
bogus AI agents stuck in loops will get us there soon enough
Macha•1d ago
The one who lost doesn't get discussed in this thread.
recursive•1d ago
Yes, but that doesn't make it trivial.
GodelNumbering•1d ago
There was a guy who got fired from Meta for creating excessive automated diffs in pursuit of a certain magic number
paxys•1d ago
I hope PR #80085 was worth it.
fancyswimtime•1d ago
69 doesn't seem excessive
nithssh•22h ago
Sounds interesting, is there anything online about this?
maniacalhack0r•1d ago
AasishPokhrel made 2 repos yesterday - "shit" and "yep" - with no activity between May 17th and June 10th.

I have no idea if it's possible to calculate the rate at which repos are being created and time your repo creation to hit vanity numbers.

kylehotchkiss•1d ago
I think he's in university for software development in Nepal, and it's really touching that a milestone like this could reach so deep into the world. Hopefully he has a big spot for this on his resume and can find a great career in development!
netsharc•1d ago
I don't get why this needs a big spot on his resume, or why it should lead to a great career. A company/hiring manager who thinks being lucky enough to hit a magic number on some system has any relevance to work is one I'd rate as very insane...
mkagenius•1d ago
I will be really sus of someone's intelligence if they mentioned this as an achievement. It's fine as a joke, though.
notfed•1d ago
I find a bit of humor in the fact that this is completely unrequited attention. There's even a chance the guy is oblivious.
fHr•1d ago
repo named shit LMFAO nice
joshdavham•1d ago
This is actually incredible.
umanwizard•1d ago
On a serious note, I'm a bit surprised that GitHub makes it trivial to compute the rate at which new repositories are created. Isn't that kind of information usually a corporate secret?
raincole•1d ago
Is there any reason for GitHub to hide this information though? How could it be used against them?

(I understand many companies default to not expose any information unless forced otherwise.)

toast0•1d ago
The rate of creation is, like, meh, but being able to enumerate all of the repos might be problematic: following new repos and scanning them for leaked credentials could be a negative... but github may have a feed of new repos anyway?

Also, having a sequence implies at least a global lock on that sequence during repo creation; repo creation could otherwise be a scoped lock. OTOH, it's not necessarily handled that way --- they could hand out ranges of the sequence to different servers/regions, and the repo ID may not actually be sequential.
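
For illustration, a minimal sketch of that range-handout idea (a hypothetical scheme, not GitHub's actual one): the global lock is taken once per block instead of once per repo.

import threading

class RangeAllocator:
    """Leases blocks of sequential IDs so each server can assign
    repo IDs locally instead of taking a global lock per creation."""

    def __init__(self, block_size=1000):
        self._lock = threading.Lock()
        self._next_start = 1
        self._block_size = block_size

    def lease_block(self):
        # The only globally synchronized step: claim a whole range at once.
        with self._lock:
            start = self._next_start
            self._next_start += self._block_size
        return range(start, start + self._block_size)

# Two "servers" lease disjoint blocks; IDs stay unique but are no
# longer strictly ordered across servers.
allocator = RangeAllocator()
server_a = iter(allocator.lease_block())
server_b = iter(allocator.lease_block())
print(next(server_a), next(server_b))  # 1 1001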

colechristensen•1d ago
>following new repos and scanning them for leaked credentials could be a negative

People do this. GitHub started doing it too so now you get a nice email from them first instead of another kind of surprise.

mdaniel•1d ago
Email, bleh. I'm sure I'm not the only one who basically /dev/null's emails from GitHub about pearl-clutching "security", but I wanted to point out that for quite a few providers they actually have an integration to revoke them if found in a public repo, which I think is way more handy

https://docs.github.com/en/code-security/secret-scanning/sec...

and the list is way bigger than I recalled: https://docs.github.com/en/code-security/secret-scanning/int...

colechristensen•1d ago
You can turn those GitHub security warnings off if you don't want them.

>quite a few providers they actually have an integration to revoke them if found in a public repo, which I think is way more handy

Yes I've also gotten an email from Amazon saying they revoked a key someone inadvertently leaked (but so long ago I only remember that it happened). I read my AWS emails at least.

4hg4ufxhy•1d ago
What would be the issue with a global lock? I think repo creation is a very rare event when measured in computer time.
progval•21h ago
> but github may have a feed of new repos anyway?

Yes: https://docs.github.com/en/rest/repos/repos?apiVersion=2022-... (the since parameter takes a repository ID, so you can page through everything created after a given point).
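
A minimal sketch of paging that endpoint (real API; unauthenticated calls are limited to 60 requests/hour, so a serious crawler would send a token):

import json
import urllib.request

def repos_since(repo_id):
    # "since" is a repository ID: returns the next page of public
    # repositories with IDs strictly greater than it.
    url = f"https://api.github.com/repositories?since={repo_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Page forward from just below the billionth repository.
for repo in repos_since(999_999_990):
    print(repo["id"], repo["full_name"])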

tough•11h ago
and using their obscure GraphQL API, you can do the same for -new commits- across any repos.

they have some secret-leak scanning infra for enterprise

xboxnolifes•1d ago
Companies usually hide this type of information so competitors have a harder time determining if they are growing/shrinking/neutral.
dietr1ch•1d ago
And engineers thinking of scale would usually try to steer away from a sequential ID because of self-inflicted global locking and hot spots.
blitzar•22h ago
Companies usually hide this type of information so VC's / stonk investors will give them more money.
cheschire•1d ago
When your moat is a billion wide, you tend to walk around in your underwear a bit more I guess.
90s_dev•1d ago
Excellent Diogenes quote reference.
sebastiennight•22h ago
Do you mean a specific quote here? I couldn't find the reference.
90s_dev•18h ago
The answer to that question is in the eye of the beholder or something idk
NooneAtAll3•1d ago
unless you're youtube?
paulddraper•1d ago
You can see the rate of creation of new users too.

Which is arguably even more interesting…

beaugunderson•1d ago
and you can find the latest ID incredibly quickly using binary search! (I used to track a bunch of websites' growth this way)
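
A sketch of how such a binary search could work, using the list endpoint above as the probe (checking single IDs directly wouldn't do, since deleted repos 404 too): an empty page for ?since=X means X is at or past the newest ID.

import json
import urllib.request

def any_repo_after(repo_id):
    """True if at least one public repository has an ID above repo_id."""
    url = f"https://api.github.com/repositories?since={repo_id}"
    with urllib.request.urlopen(url) as resp:
        return len(json.load(resp)) > 0

def latest_repo_id():
    # Exponential search for an upper bound, then binary search:
    # roughly 2*log2(N) requests, ~60 for N around a billion.
    hi = 1
    while any_repo_after(hi):
        hi *= 2
    lo = hi // 2
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if any_repo_after(mid):
            lo = mid
        else:
            hi = mid
    return hi  # smallest ID with nothing after it, i.e. the newest

print(latest_repo_id())
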
jaynate•1d ago
The repo three comma club
badc0ffee•1d ago
Tres commas
cheschire•1d ago
Commits that go like \o/ this.
codethief•1d ago
References for everyone else: https://m.youtube.com/watch?v=wGy5SGTuAGI @ 4:28-6:30
morkalork•1d ago
I still think of that line whenever I see a car that has doors that go like o/ and not like o_
opello•1d ago
but, tres comas; only one m :)
adsteel_•1d ago
I only realized last week that Tres Comas tequila is just a bottle of 1800 Reposado with a different label
mkagenius•1d ago
Tastes best when put on a keyboard
9dev•1d ago
Sigh. You can't make that shit up. I'm sure there's a witty metaphor in there, somewhere…
8organicbits•1d ago
While we are doing cool GitHub repo IDs, the first is here:

https://api.github.com/repositories/1

https://github.com/mojombo/grit

mkagenius•1d ago
The millionth one is "vim-scripts/nexus.vim"

The 1000th is missing.

dgellow•20h ago
And the first commit: https://github.com/mojombo/grit/commit/634396b2f541a9f2d58b0...
Aachen•1d ago
Makes me wonder how many repositories exist in general, from all the local Forgejo and Gitlab servers. Heck, include Subversion and Mercurial and git's other friends (and foes!)

Did anyone make a search engine for these yet, so we'd be able to get an estimate by searching for the word "a" or so?

(This always seemed like the big upside of centralised GitHub to me: people can actually find your code. I've been thinking of making a search engine since MS bought GH, but I didn't think I could handle the marketing aspects, so it seemed like a waste of effort and I never did it. Recently I was considering whether it would be worth revisiting, with the various projects I'm putting on Codeberg, but maybe someone beat me to the punch.)

mdaniel•1d ago
Well, based on the API enumeration mentioned in sibling comments, surely one doesn't have to estimate

https://docs.gitlab.com/api/projects/#list-all-projects (for dumb reasons it seems GL calls them Projects, not Repositories)

https://codeberg.org/api/swagger#/repository/repoGetByID (that was linked to by the Forgejo.org site, so presumably it's the same for it and Codeberg) and its friend https://gitea.com/api/swagger#/repository/repoGetByID

Heptapod is a "friendly fork" of GitLab CE so its API works the same: https://heptapod.net/pages/faq#api-hgrc

and then I'd guess one would need to index the per-project GitLab instances: Gnome, GNU (if they ever open theirs back up), whatever's going on with Savannah, probably Sourceforge, maybe sourcehut (assuming he doesn't have some political reason to block you), etc

If I won the lottery, I'd probably bankroll a sourcegraph instance (from back when they were Apache) across everything I could get my hands upon, and donate snapshots of it to the Internet Archive

progval•21h ago
At Software Heritage, we listed 380M public repositories, 280M of which are on Github: https://archive.softwareheritage.org/

Repository search is pretty limited so far: only full-text search on URLs or in a small list of metadata files like package.json.

90s_dev•1d ago
This is either staged,

or incredible commentary on most github repos,

having no purpose, never being realized, and even having given up dreaming.

kristopolous•1d ago
Honestly I think the plurality are by students, maybe even the majority.

When I was a student I had to manually set up CVS pserver and CVSWeb to collaborate with other students on assignments.

This is a bit easier by at least a few orders of magnitude.

MindTheAbstract•1d ago
I really hope it's the latter, that would be quite brilliant
90s_dev•1d ago
I don't mean intentionally.

I mean maybe he made that repo because he'd given up on his coding dreams. Almost.

That would be some interesting accidental meta commentary on the state of things.

extraduder_ire•1d ago
I think, by design, every commit in git is staged.
90s_dev•1d ago
Wow.
caleblloyd•1d ago
Awesome! Only a little over a billion more to go before GitHub's very own OpenAPI Spec can start overflowing int32 on repositories too, just like it already does for workflow run IDs!

https://github.com/github/rest-api-description/issues/4511

bartread•1d ago
At the company where I did my stint as CTO, I turned up and noticed they were using 32-bit integers as primary keys on one of their key tables, a table that already had 1.3 billion rows and, at the rate they were adding them, would overflow its primary key values within months… so we ran a fairly urgent project to upgrade the IDs to 64-bit and avoid the total meltdown that would otherwise have ensued.
hobs•1d ago
heh, that's happened at at least 5 companies I have worked at. Go to check the database and find: currency as floats, hilarious indexes, integers gonna overflow, gigantic types with nothing in them.
rudasn•1d ago
I bet you haven't seen indexes on decimals though! Fun times :)
azophy_2•1d ago
Just curious, as someone with limited experience on this: what's wrong with it? Decimal is consistent & predictable (compared to float), so it shouldn't be that big of a deal, right? CMIIW
rudasn•1d ago
Yeah, not a big deal, but completely useless nonetheless: you would never really query your table on just the one decimal column (e.g. the price) but on a couple more (e.g. the category and the price), so you'd have a multi-column index on those columns. The index on just the price column never gets used.
cwbriscoe•23h ago
What if you wanted to select the "top 100 most expensive products", or the number of products between $0.01 and $10, $10.01 and $100, $100.01 and $1000? Sure, you could do a full table scan on your products table for both queries, but an index on price would speed both up a lot if you have a lot of products. Of course, you have to determine whether the index would be used enough to make up for the extra time spent updating it when prices change or products are added or deleted.
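
A quick way to sanity-check whether a single-column price index gets used, sketched with Python's built-in sqlite3 (planners differ across engines, but the idea carries):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price NUMERIC)")
con.execute("CREATE INDEX idx_price ON products (price)")
con.executemany(
    "INSERT INTO products (price) VALUES (?)",
    [(i * 0.37,) for i in range(10_000)],
)

# Range query over price alone: the planner searches the index
# instead of scanning the whole table.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT COUNT(*) FROM products WHERE price BETWEEN 0.01 AND 10"
).fetchall()
print(plan)  # ... SEARCH products USING COVERING INDEX idx_price ...
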
erulabs•22h ago
Cheap solution, sure, add an index. But you're asking an OLAP question of an OLTP system. Questions like that are best asked of at least an out-of-production read replica, or better, an analytics DB.
robertlagrant•21h ago
I don't really understand this - what is an out of production read replica? Why wouldn't it just go to a production read replica?

And what is an "analytics db" in this context?

gkalin59•16h ago
You stream CDC events to maintain a 1-to-1 replica in something like Snowflake/Databricks, where you can run all kinds of OLAP workloads; that replica is the analytics DB.
cwbriscoe•15h ago
In the real world, people want cheap solutions and they want it yesterday.
hobs•3h ago
They'd certainly need decimals in the first place, but yeah, I have seen indexes on every column, multiple times; I have seen indexes such that the sum of their sizes was 26 times the size of the original data... data that's actively being written to.
gchamonlive•1d ago
What are the challenges of such projects? How many people are usually involved? Does it incur downtime or significant technical challenges for either the infrastructure or the codebase?
jiggawatts•1d ago
Not the original commenter, but I've read through half a dozen post-mortems about this kind of thing. The answer is: yes. There are challenges, and sometimes downtime and/or breaking changes are inevitable.

For one, if your IDs are approaching the 2^31 signed integer limit, then by definition, you have nearly two billion rows, which is a very big DB table! There are only a handful of systems that can handle any kind of change to that volume of data quickly. Everything you do to it will either need hours of downtime or careful orchestration of incremental/rolling changes. This issue tends to manifest first on the "biggest" and hence most important table in the business such as "sales entries" or "user comments". It's never some peripheral thing that nobody cares about.

Second, if you're using small integer IDs, that decision was probably motivated in part because you're using those integers as foreign keys and for making your secondary indexes more efficient. GUIDs are "simpler" in some ways but need 4x the data storage (assuming you're using a clustered database like MySQL or SQL Server). Even just the change from 32-bits to 64-bits doubles the size of the storage in a lot of places. For 2 billion rows, this is 8 GB more data minimum, but is almost certainly north of 100 GB across all tables and indexes.

Third, many database engines will refuse to establish foreign key constraints if the types don't match. This can force big-bang changes or very complex duplication of data during the migration phase.

Fourth, this is a breaking change to all of your APIs, both internal and external. Every ORM, REST endpoint, etc... will have to be updated with a new major version. There's a chance that all of your analytics, ETL jobs, etc... will also need to be touched.

Fun times.
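
A back-of-the-envelope check on those storage numbers (illustrative figures only; real overhead depends on the engine, fill factor, and index count):

# Widening a 4-byte int PK to 8 bytes, for 2 billion rows:
ROWS = 2_000_000_000
delta_per_row = 8 - 4  # bytes

base = ROWS * delta_per_row
print(base / 1e9, "GB extra in the base table alone")  # 8.0 GB

# Every secondary index and every foreign key column that mirrors the
# PK pays the same delta again; with per-row and per-page overhead on
# top, the total climbs well past the base figure.
copies = 6  # e.g. 3 FK columns + 3 secondary indexes (made-up figure)
print(base * (1 + copies) / 1e9, "GB across tables and indexes")  # 56.0 GB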

lcnPylGDnU4H9OF•1d ago
> For one, if your IDs are approaching the 2^31 signed integer limit, then by definition, you have nearly two billion rows

Just wanted to nitpick this; this is not actually definitively true. A failed insert in some systems will increment the counter and deleting rows usually does not allow the deleted ID to be re-used (new inserts use the current counter). Of course, that is beside the point: the typical case of a table approaching this limit is a very large table.

lmm•1d ago
It's actually fairly common to see this problem crop up in systems that are using a database table as a queue (which is a bad idea for many reasons, but people still do it) in which case the number of live rows in the table can be fairly small.
jamwil•1d ago
If a SQL Server instance is killed unceremoniously, it adds 1000 to the PK increment.
jiggawatts•17h ago
I'm trying not to imagine the poor SQL Server that has crashed one or two million times and hence pushed the ID values into the billions!
jamwil•15h ago
haha—somewhere out there it’s crashing right now. Keep it in your thoughts.
bartread•1d ago
Changing the type of the column is no big deal per se, except on a massive table it’s a non-trivial operation, BUT you also have to change the type in everything that touches it, everywhere it’s assigned or copied, everywhere it’s sent over the wire and deserialized where assumptions might be made, any tests, and on, and on. And god help you if you’ve got stuff like int.MaxValue having a special meaning (we didn’t in this context, fortunately).

Our hosting environment at that time was a data centre, so we were limited on storage, which complicated matters a bit. Ideally you'd create a copy of the table but with a wider PK column, write to both tables, then migrate your reads, etc., but we couldn't do that because the table was massive and we didn't have enough space. Procuring more drives was possible but sometimes took weeks - not just dragging a slider in your cloud portal. And then of course you'd have to schedule a maintenance window for somebody to plug them in. It was absolutely archaic, especially when you consider this was late 2017/early 2018.

You need multiple environments so you can do thorough testing, which we barely had at that point, and because every major system component was impacted, we had to redeploy our entire platform. Also, because it was the PK column affected, we couldn’t do any kind of staged migration or rollback without the project becoming much more complex and taking a lot longer - time we didn’t have due to the rate at which we were consuming 32-bit integer values.

In the end it went off without a hitch, but pushing it live was still a bit of a white knuckle moment.

robertlagrant•21h ago
Well done. Unsung heroes keeping it all going, and unsung villains who chose int32 in the first place long gone :-)
ipaddr•9h ago
This comment can be reused when int64 is forced to change into int128 or int255 in the future.
tengbretson•1d ago
If you've written your services in JavaScript, going from i32 to i64 means your driver is probably going to return it as a string (or a BigInt, or some custom Decimal type) rather than the IEEE754 number you were getting before. This means you now need to change your interfaces (both internal and public-facing) to a string or some other safely serializable representation. And if you are going to go through all that trouble, you may as well take the opportunity to just switch to some UUID strategy anyway.

The alternative is that you can monkey-patch the database driver to parse the i64 id as an IEEE754 number anyway and deal with this problem later when you overflow the JavaScript max safe integer size (2^53), except when that happens it will manifest in some really wacky ways, rather than the db just refusing to insert a new row.
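
For illustration, Python floats are the same IEEE754 doubles as JavaScript numbers, so the cliff is easy to demonstrate:

# Doubles have a 53-bit mantissa, so integers are exact only up to
# 2**53; past that, distinct IDs collide after a float round-trip.
MAX_SAFE = 2**53  # 9_007_199_254_740_992

print(float(MAX_SAFE) == float(MAX_SAFE + 1))  # True: two IDs, one value
print(int(float(MAX_SAFE + 1)))                # ...992, the +1 is gone
print(float(MAX_SAFE - 1) == float(MAX_SAFE))  # False: still safe below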

dietr1ch•1d ago
Maybe you are better off moving to UUIDs then? It seems there are packages to make handling them easier, but you'll still need a tiny hack to map old i32 IDs to some UUID.
roberttod•1d ago
I remember such a project; due to our large and aging TypeScript frontend projects, it would have added a couple of weeks to adjust all the affected types. IDs in many places deep in the code caused thousands of errors from the mismatch, which was a nightmare. I can't remember exactly why it was so tough to go through them all, but we were under intense time pressure.

To speed things up we decided to correct the ID types for the server response, which was key since they were generated from protobuf. But we kept everything using number type IDs everywhere else, even though they would actually be strings, which would not cause many issues because there ain't much reason to be doing numeric operations on an ID, except the odd sort function.

I remember the smirk on my face when I suggested it to my colleague and at the time we knew it was what made sense. It must have been one of the dumbest solutions I've ever thought of, but it allowed us to switch the type eventually to string as we changed code, instead of converting the entire repos at once. Such a Javascript memory that one :)

cyberax•1d ago
The same story happened inside Amazon.
darkwater•23h ago
Lived that with a MySQL table. The best part is that the table was eventually decommissioned (long after the migration) because the whole data model around it was basically wrong.
neomantra•20h ago
A couple of weeks ago there were some Lua community issues because LuaRocks surpassed 65,535 packages (the uint16 maximum).

There was a conflict between this and the LuaRocks implementation under LuaJIT [1] [2], inflicting pain on a narrow set of users as their CI/CD pipelines and personal workflows failed.

It was resolved pretty quickly, but interesting!

[1] https://github.com/luarocks/luarocks/issues/1797

[2] https://github.com/openresty/docker-openresty/issues/276

Drblessing•1d ago
Imagine if this was private. We would've lost out on this glorious moment.
jonplackett•1d ago
Holy shit, that’s amazing.
Aachen•1d ago
Reminds me of the 100 millionth OpenStreetMap changeset (commit). A few people, myself included, were casually trying for it, but in the end it went to someone who wasn't trying and was just busy mapping Africa! Much more wholesome, seeing it with hindsight. This person had also previously been nominated for an OSM award. I guess it helps that OpenStreetMap doesn't really allow for creating crap, because it's all live in production, so the Nth commit is far more likely to land on someone's ordinary edit than on a staged grab. Either way, a fun achievement for GitHub :)

In case anyone cares to read more about the OSM milestone, the official blog entry: https://blog.openstreetmap.org/2021/02/25/100-million-edits-... My write-up of changeset activity around the event: https://www.openstreetmap.org/user/LucGommans/diary/395954

chneu•1d ago
Your kind of comment is exactly why HN still rules. What a fun story. Thanks for sharing
Aachen•1d ago
Aww, thanks! I wasn't sure if I should go off-topic this much so I'm happy to hear this!
ash_091•1d ago
A friend of mine spent an entire workday figuring out how to ensure he created the millionth ticket in our help desk. Not sure how he cracked it in the end but we had a little team party to celebrate the achievement.

This was probably fifteen years ago. I feel like working in tech was more fun back then.

darkwater•23h ago
I wonder which is the latest ID today then...
deruta•15h ago
I was involved in the 99,999th and the 100,000th one in my FQA days.

We were being onboarded, they were just for demo and were promptly deleted. No one cared about the Cool Numbers.

jpsouth•14h ago
In my first job I raised JIRA-1337 and was pretty chuffed with myself, being on a team of young, nerdy, gamer-type folk. My manager, not so much; they had wanted to raise it themselves (for a meme?), but I was doing actual work rather than watching numbers go up, so it was quite satisfying that it was a genuine defect.
lbeckman314•1d ago
> https://ohshitgit.com
hyperhopper•1d ago
https://stevelosh.com/blog/2013/04/git-koans/
zaps•1d ago
Respect to GitHub for committing to the bit
Lammy•1d ago
check 'em https://knowyourmeme.com/memes/dubs-guy-check-em
CGamesPlay•1d ago
Probably created via a script that just repeatedly checked https://api.github.com/repositories/999999999 until it showed up, and then created a new repository. Since repositories can be renamed or deleted, he could even have given it some buffer and created a bunch of repos, then deleted the ones that didn't get the right number. [append] Looking at the author's other repo created yesterday, I'm betting "yep" was supposed to be the magic number, and "shit" was an admission of missing the mark.

Does anyone remember D666666 from Facebook? It was a massive codemod; the author used a technique similar to this one to get that particular number.
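
A minimal sketch of that kind of script (the creation step itself would be an authenticated POST to /user/repos, omitted here):

import time
import urllib.error
import urllib.request

WATCH = 999_999_999  # once this ID exists, #1,000,000,000 is next

def repo_exists(repo_id):
    try:
        urllib.request.urlopen(f"https://api.github.com/repositories/{repo_id}")
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise

# Poll until the neighbor shows up, then race to create repos
# (ideally several, deleting the ones that miss the number).
while not repo_exists(WATCH):
    time.sleep(5)
print(f"Repo {WATCH} exists -- go create!")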

topherPedersen•1d ago
You solved the mystery!
notfed•1d ago
Or...not. Why are you assuming this guy purposely grabbed the repo?
CGamesPlay•1d ago
Mostly just to share an approach to solving the "problem" of getting memorable numbers from a pool of sequential IDs.

But given that this user doesn't have activity very often, and created two repositories as the number was getting close, it feels likely that it was deliberate. I could be wrong!

JKCalhoun•1d ago
I wish I were still at Apple. Probably most people here know that Apple has used an internal tool called "Radar" since, well, forever. Each "Radar" has an ID (bug #) associated with it.

Radars that were bug #1,000,000, etc. were kind of special. Unless someone screwed up (and let down the whole team) they were usually faux-Radars with lots of inside jokes, etc.

Pulling up one was enough since the Radar could reference other Radars ... and generally you would go down the rabbit hole at that point enjoying the ride.

I was a dumbass not to capture (heck, even print) a few of those when I had the opportunity.

xmprt•1d ago
> I was a dumbass not to capture (heck, even print) a few of those when I had the opportunity.

On the other hand, given how Apple deals with confidential data, you probably wouldn't want to be caught exfiltrating internal documents however benign they are.

msarnoff•1d ago
#SnakesOnARadar
bjackman•23h ago
At Google, the monorepo VCS has monotonic IDs like this for changes. Unfortunately a few years ago when approaching some round number, the system was DOS'd by people running scripts trying to snag the ID. So now it skips IDs in the vicinity of big round numbers :(

I think there's probably a lesson in there about schema design...

almosthere•1d ago
facepalm
bigbuppo•1d ago
Aww, he renamed it from shit to historic-repo.
Sohcahtoa82•1d ago
The repo seems to have gotten renamed and now redirects to https://github.com/AasishPokhrel/repository/

Lame. :-(

Sohcahtoa82•13h ago
It was renamed back! :-D
nojs•1d ago
Haha, he's renamed it from "shit" to "repository", which makes the comments less funny
carlhjerpe•1d ago
Readme still has shit in it
jonasdegendt•19h ago
Check 'em! Sick dubs :^)
ChoGGi•17h ago
That's a perfect one billion repo.
hoppp•15h ago
Nerd humor is funny as shit