Perhaps Microsoft offered to pick up the tab that Google has been paying (but which is now imperiled), or at least to lend some sort of financial support, and Mozilla cares more about paying its bills than about open source.
Bad PRs all around, with a constant stream of drive-by "why no merge?!?!?!" comments.
I think you might be onto something: with the Google cash flow coming to an end, Mozilla may be in discussions with Bing, and using Microsoft's servers could be part of the agreement.
They should restructure instead: hire people who actually want to work on the software, and stop using the corporation and the foundation around it as a platform for their... peculiar "endeavours". But I doubt that's going to happen; the flow of cash from Google, and from all those naive people who think supporting Mozilla directly contributes to Firefox, seems too good to give up. Then again, it's understandable that they do this: the Google money tap can get twisted shut.
Even before this, Mozilla almost certainly used hundreds of closed-source tools, including things like Slack, Excel, Anaplan, Workday, etc.
Issues are stored in git-bug and automatically synced. GitHub is the only viable option, but you can keep the others as mirrors for when GitHub chooses to strike you.
That was the point of the (obviously ill-received) joke.
No serious engineer will read that line and think, "wow, how malicious of Mozilla, they just closed all bug reports at once".
Never underestimate the cynicism of HN commenters.
Sure, there would be local copies everywhere, but for a distributed version control system, it's pretty centralized on GitHub.
Everything else... as the original comment said, is pretty centralized for a decentralized system.
Maybe if Git had native support for PRs and issues this wouldn't have happened. (And yes I'm aware of git send-email etc.)
It's often useful. But sometimes you want to use other tools, like firing up your editor to explore.
Note we’re talking about the GitHub UI mostly. Pulling and merging a remote branch is a basic git operation, almost a primitive.
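For example, a minimal sketch (the remote and branch names are placeholders):

    git fetch origin               # grab the remote's latest state
    git merge origin/feature-x     # merge the remote branch into the current one
    git pull origin feature-x      # or both steps at once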
Edit: ripgrep was just a test
More: https://github.blog/engineering/the-technology-behind-github...
Not only are the results incomplete, but it seems that once they went all-in on training LLMs on the code they host, they made sure no one else could do the same easily; now everything is madly rate-limited.
Every time I just clone and grep.
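Which works well enough; a sketch (the URL is a placeholder, and ripgrep is just one choice of grep):

    git clone --depth 1 https://github.com/example/project.git   # shallow clone, no history
    rg 'pattern' project/                                        # search locally, no rate limits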
Everything is fully and completely explained, in terms which mean nothing.
(They ain't perfect, of course.)
Also, git stores the files in a smarter way, so repository size won't explode the way it does with zip-file versioning.
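Roughly: identical content is stored once as a content-addressed blob, and "git gc" delta-compresses objects into packfiles. A small illustration (sketch, run inside any repo):

    echo 'hello' > a.txt
    cp a.txt b.txt
    git hash-object a.txt b.txt   # prints the same blob id twice: one copy stored
    git gc                        # repacks objects with delta compression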
Or previous versions. Plural. Yes.
Well, that's one half of git. The other half is the tooling for working with the snapshots and their history, e.g. to perform merges.
If you push rewritten history to master, you're a git.
Conclusion: learn your tools.
The thing is, we could have been doing better (and sometimes were) since before git even existed.
Everything surrounding code: issues, CICD, etc, is obviously another story. But it's not a story that is answered by distributed git either. (though I would love a good issue tracking system that is done entirely inside git)
That's what GitHub is, though: it's not about the code itself, it's about all your project management being on GitHub, and once you move it there, moving out isn't realistic.
That's how we started out.
The issue tracking can be a branch and then you just need a compatible UI. In fact some git front ends do exactly this.
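The basic idea, sketched with plain git (the file layout here is made up; real tools like git-bug use git objects directly):

    git checkout --orphan issues   # a branch sharing no history with the code
    git rm -rf .                   # clear the inherited working tree
    mkdir open
    echo 'Crash on startup when the profile is missing' > open/0001.md
    git add open && git commit -m 'issue #1: crash on startup'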
CI/CD does already exist in git via githooks. And you're already better off using make/just/yarn/whatever for your scripts and relying as little on YAML as possible. It's just a pity that githooks require each user to set them up, so many people simply don't bother.
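A minimal sketch, assuming the project has a "make test" target:

    #!/bin/sh
    # .githooks/pre-push: abort the push when the tests fail
    make test || exit 1

The per-clone setup pain can at least be reduced by versioning the hooks in the repo and pointing git at them once with "git config core.hooksPath .githooks".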
There are several such solutions already. The problem is that none of them is popular enough to become a de facto standard. And, of course, centralized git providers like GitHub have a vested interest in keeping it this way, so they are unlikely to support any such solution even if it does become popular enough.
For the actual event we are commenting on, they have disabled all features other than code hosting and PRs.
It's very silly they have to do this, but at least they can I suppose.
Sad to see Mozilla becoming less and less what they once promised to be, now that the Google funding is drying up.
You could, but generally people can't. They learn a set of narrow workflows and never explore beyond them. GitHub use translates into GitLab use, but not into general git use without a central repository.
> Everything surrounding code: issues, CICD, etc, is obviously another story. But it's not a story that is answered by distributed git either. (though I would love a good issue tracking system that is done entirely inside git)
Radicle offers one. CLI-based, too.
And tbh, that's how it should be for a version control system. Before git, with its byzantine workflows and a thousand ways to do the same thing, version control (e.g. svn) was a thing that just hummed along invisibly in the background, something you never had to 'learn' or even think about, much like the filesystem.
I don't need to know how a filesystem works internally to be able to use it.
And having a centralized store and history helps a lot to keep a version control system conceptually simple.
In git, working on your own branch is essential to not step on other people's feet and to get a clean history on a single main/dev branch (and tbf, git makes this easy for devs and text files). With a centralized version control system, both problems don't even exist in the first place.
When we did game development with a team of about 100 peeps (about 80 of those non-devs, and about 99% of the data under version control being in binary files) we had a very simple rule:
(1) do an update in the morning when you come to work, and (2) in the evening before you leave do a commit.
Everybody was working on the main branch all the time. The only times this broke down was when the SVN server in the corner was filling up, and we either had to delete chunks of history (also very simple with svn) or get more memory and a bigger hard drive for the server.
Subversion also isn't some thing humming along invisibly in the background, it has its own quirks that you need to learn or you'll get stung.
Tbh, I really wonder where the bad reputation of svn is coming from. Git does some things better, especially for 'programmer-centric teams'. But it also does many things worse, especially in projects where the majority of data is large binary files (like in game development) - and it's not like git is any good either when it comes to merging binary data.
We used TortoiseSVN as UI which worked well both for devs and non-devs.
With this sort of setup, git would break down completely if it weren't for awkward hacks like git-lfs (which comes with its own share of problems).
The point is you CAN. Joe can in theory do it, and Steve can make an alternative piece of software to do it for Joe. In most other centralized places (like social media), you CANNOT. Joe cannot take his data off of Facebook and interact with it outside of the platform or move it to another platform.
If you happen to agree with it, then yeah, it's great. If you like to commit quick and dirty and then tidy it up by squashing into logically complete and self-consistent commits, too bad.
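That is, the (sketched) workflow of committing messily and folding it into clean commits before review:

    git commit -m 'wip'
    git commit -m 'wip: fix typo'
    git rebase -i HEAD~2   # mark the second commit as 'fixup' to fold it into the first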
However, if you were to compare something like Fossil with git+GitHub, then again: no.
Good call; the conversation here (the comments are almost interchangeable at times, haha!) being that everyone should use git for Firefox is kind of a wild topic!
Embrace, Extend...
(Largely this is unfair, as plain git leaves much to be desired; but you can't deny that the things surrounding git on GitHub are very sticky.)
You might like git-bug:
> Everything surrounding code: issues, CICD, etc, is obviously another story. But it's not a story that is answered by distributed git either. (though I would love a good issue tracking system that is done entirely inside git)
There is https://github.com/git-bug/git-bug - I would love it if people started to use it, even in a read-only way: use GitHub issues normally, but also have a bot that saves all comments to git-bug, so that I can read issues without an internet connection. Then, at a later date, make it so that people who open issues on git-bug also get the issue posted on GitHub, making a two-way bridge.
Then, optionally, at a later stage when almost everyone has migrated to git-bug, make the GitHub issues a read-only mirror of the git-bug issues. Probably not worth it: you lose drive-by comments from newcomers (who already have a GitHub account but have probably never heard of git-bug), raising the friction of reporting bugs.
The literal project we are discussing is just code. It's literally just code. It doesn't have issues, PRs are disabled as much as they can be (by a GitHub action that automatically closes all PRs with a note that code should be submitted elsewhere), and all "other stuff" is disabled.
Some big repos or organizations might be able to pull this off, but good luck having a small project and then directing users to go through all of those hoops to submit issues somewhere else, open PRs somewhere else, etc.
https://github.com/git-bug/git-bug/blob/master/doc/usage/thi...
I have not tried it.
It won't be free software and, likely, it will be Microsoft.
It's not like the hairy C++ code base of Firefox will suddenly become less scary and attract more open source developers simply because it's moving to Github.
Didn't all this start with Linus getting into a spat with the BitKeeper dev, involving some sort of punitive measure as a response to somebody making a reverse-engineered FOSS client? I don't remember the details and I'm sure I have at least half of them wrong, but that's easily one of the most disastrous decisions in the history of the software business, right up there with Valve turning down Minecraft and EA refusing to make sports games for the SEGA Dreamcast (that last one isn't as well known, but it led to SEGA launching the 2K Sports brand, which outlasted the Dreamcast and eventually got sold to a different company, but otherwise still exists today and is still kicking EA's ass on basketball games).
It's a joke that the BitKeeper dev has two revision control systems named after him: Mercurial and Git.
And while NBA 2k destroyed NBA Live it took until 2009 for that to start happening (long after Sega ownership), mainly down to sliding standards in EA’s NBA Live titles and eventually some disastrous EA launches.
But there were already quite a handful of other distributed version control systems around by the time git showed up.
So if Linus hadn't written git, perhaps we would be using darcs these days. And then we'd be debating whether people are using darcs the way it was intended. Or bazaar or monotone or mercurial etc.
I don't think what the original authors of any one tool intended matters very much, when there were multiple implementations of the idea around.
That's the default. But git would work just as well, if by default it was only cloning master, or even only the last few commits from master instead of the full history.
You can get that behaviour today, with some options. But we can imagine an alternate universe where the defaults were different.
Most of what you say, eg about not needing lockfiles and being able to make independent offline commits, still applies.
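Today that alternate default is just flags away; a sketch (the URL is a placeholder):

    git clone --depth 1 --single-branch --branch master https://example.com/big-project.git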
I am contributing to a few open source projects on GitHub here and there though.
Git is by far the most widely used VCS. The majority of code hosting services use it.
People who are very insistent on distributed solutions never seem to understand that the economic, social and organizational reasons for division of labor, hierarchy and centralization didn't suddenly go away.
And: even though the source of truth is centralized on GitHub for many projects, git still benefits from being distributed. It's the basis for "forks" on GitHub and for the way people develop: cloning locally, committing locally, and preparing the change set for review. In the CVS/SVN days one had to commit to the central branch way sooner and more directly.
Then later on for the PR, you can sanitise the whole thing for review.
In the bad old days, you only got the latter. (Unless you manually set up an unrelated repository for the former yourself.)
Gitorious was chosen by the MeeGo/Maemo team, for example.
And I am one of the people saddened by the convergence on a single platform.
But you can't deny, it's always been pretty great.
In the Linux kernel, project management is done via email (which in the end is also just a centralized server), so what's the problem?
Of the tools I use, Composer and Homebrew rely on GitHub to work.
If you weren't connected to the internet, you couldn't do a thing. You couldn't check out. You couldn't commit. You couldn't create branches. The only thing on your computer was whatever you had checked out the last time you were connected to the server.
People talk about SVN, but it wasn't that common in 2005. None of the project hosting platforms (like SourceForge) supported SVN; they were all still offering CVS. If you wanted to use SVN, you had to set it up on your own server. (From memory, Google Code was the first to offer SVN project hosting, in mid-2006.) Not that SVN was much better than CVS. It was more polished, but shared all the same workflow flaws.
Before Git (and friends), nothing like pull-requests existed. If you wanted to collaborate with someone else, you either gave them an account on your CVS/SVN server (and then they could create a branch and commit their code), or they sent you patch files over email.
The informal email pull requests of git were an improvement... though you still needed to put your git repo somewhere public. Github and its web-based pull requests were absolutely genius. Click a button, fork the project, branch, hack, commit, push, and then create a formal "pull request". It was nothing like centralised project management systems before it. A complete breath of fresh air.
And it was actually part of git. Even back in 2005, git included a script, git-request-pull, that generated these pull request emails. I'm pretty sure people called these emails "pull requests" before GitHub came along.
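It still exists; something like this (a sketch, with a placeholder URL) prints a summary ready to paste into an email:

    git request-pull v1.0 https://example.com/my-fork.git main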
I store my code in a completely distributed fashion, often in several places on different local devices (laptop, build server, backup, etc) not to mention on remote systems. I use github and gitlab for backup and distribution purposes, as well as alternative ways people can share code with me (other than sending patch emails), and other people use git to get and collaborate on my work.
distributed version control system doesn't mean distributed storage magically happens. You still need to store your code on storage you trust at some level. The distributed in DVCS means that collaboration and change management is distributed. All version control operations can be performed on your own copy of a tree with no other involvement. Person A can collaborate with person B, then person B can collaborate with person C without person A being in the loop, etc.
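In practice that's just remotes; a sketch (names and URLs are made up):

    git remote add bob https://bob.example/project.git   # B's copy, from C's point of view
    git fetch bob
    git merge bob/feature                                # B's work reaches C; A is never involved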
The general issue that git has is making them interact with each other, I would love for git to get distributed issues, and a nice client UI that is actually graphical and usable by non-terminal users.
There were some attempts to make this distributed and discoverable via similar seed architectures like a DHT. For example, radicle comes to mind.
But staying in sync with hundreds of remotes and hundreds of branches is generally not what git is good at. All UIs aren't made for this.
I'm pointing this out because I am still trying to build a UI for this [1] which turned out to be much more painful than expected initially.
The killer feature is the co-location of everything in a single forge; combined with a generous free tier, it's the Windows XP of the ecosystem: everybody has it, everybody knows it, almost nobody knows anything else.
As for PRs: I'm sure Mozilla welcome contributions, but accepting GitHub PRs is going to be a recipe for thousands of low-value drive-by commits, which will require a lot of triage.
I agree it is rather basic but I don't see how it's hard to navigate.
> accepting GitHub PRs is going to be a recipe for thousands of low-value drive-by commits, which will require a lot of triage.
I don't think that really happens based on what I've seen of other huge projects on GitHub.
Now it has "main" and "autoland"; what are they? Which one is the equivalent of the old mozilla-central?
The "new" git default branch name is 'main' and 'autoland' existed before next to 'mozilla-central' and is the one where commits usually appear first.
Commits land in autoland and get backed out if they cause test failures. That's merged to main ~twice per day when CI is happy
I've mostly encountered these branches/repos when checking commits linked to Bugzilla tickets, and I don't recall seeing "autoland" show up too much in those cases.
mozilla-central has a LOT of tests -- each push burns a lot of compute hours.
I think you can dislike the general move to a service like GitHub instead of GitLab (or something else). But I think we all benefit from the fact that Firefox's development continues and that we have a competing engine on the market.
Every contributor is valuable, it's in the name, the definition of "contribute".
Any bar to entry is bad, and it is certainly never the solution to a different problem (not being able to manage all contributions). If anything, in the longer run, it will only make that problem worse.
Now, to be clear: while I do think GitHub is currently the "solution" that lowers barriers, allows more people to contribute, and as such improves your open source project, the fact that this is so points at a different problem. There isn't any good alternative to GitHub (with a broad definition of "good"); why is that, and what can we do to fix it, if anything?
Diversity, here too, is of crucial importance. It's why some open source software has sublime documentation and impeccable translations, while other software is technically perfect but undecipherable. It's why some open source software has cute logos or appeals to professionals, while other projects remain hobby projects that no one ever takes seriously despite their technical brilliance.
Proposed contributions can in fact have negative value, if the contributor implements some feature or bug fix in a way that makes it more difficult to maintain in the long term or introduces bugs in other code.
And even if such a contribution is ultimately rejected, someone knowledgeable has to spend time and effort reviewing the code first; time and effort that could have been spent on another, more useful PR.
In practice, if you get dozens of PRs from people who clearly did it to bolster their CV, or because their professor asked them, or something like that, it just takes a toll. It's more effort than writing the same code yourself. Of course I love to mentor people, if I have the capacity. But a good chunk of the GitHub contributions I've worked on were pretty careless, not even tested, that kind of thing. I haven't done the maintainer job in a while; I'm pretty terrified by the idea of what effect the advent of vibe coding has had on PR quality.
I feel pretty smug the way I'm talking about "PR quality", but if the volume of PRs that take a lot of effort to review and merge is high enough, it can be pretty daunting. From a maintainer perspective, the best thing to have are thoughtful people that genuinely use and like the software and want to make it better with a few contributions. That is unfortunately, in my experience, not the most common case, especially on GitHub.
Alternatives to GitHub
We lament Google's browser engine monopoly, but putting the vast majority of open source projects on GitHub is just the expected course to take. I guess we'll repeat history once Microsoft decides to set the enshittification in motion; maybe one day mobile OSes replace Windows and they're strapped for cash, who knows. But it's a centralised, closed system owned by a corporation that absolutely adores FOSS.
I don't mind any particular project (such as this one) being on GitHub, and I can understand that Mozilla chooses the easy path; they've got bigger problems, after all. But it's not like there are no concerns with everyone and everything moving to GitHub.
GitLab? It was awful. Slow, and paying for that kind of experience felt like a bad joke. It's much better now but it was borderline unusable back in the day.
Or SourceForge, before Git was mainstream? Also terrible.
GitHub succeeded because it quickly established itself as a decent way to host Git - not because it was exceptional, but because the competition had abysmal UX.
Unlike other lock-in-prone services, moving a Git project is trivial. If GitHub loses its advantages due to enshittification, you just move. Case in point: Mozilla hopping on and off GitHub, as this article shows.
Not to mention that the AI-generated security "issues" reported against curl, for example, suggest there can indeed be negative value in reports and contributions.
A lot more contributions on GH, but the majority of them ignored guidelines and/or had low code quality and attention to detail. Just my anecdotal experience of course.
Both patches have been ignored thus far. That's okay, I understand limited resources etc. etc. Will they ever be merged? I don't know. Maybe not.
I'm okay with all of this, it's not a complaint. It's how open source works sometimes. But it also means all that time I spent figuring out the contribution process has been a waste. Time I could have spent on more/other patches.
So yeah, there's that.
It's certainly true that making the bar higher will reduce low-quality contributions, because it will reduce ALL contributions.
(aside: FreeBSD does accept patches over GitHub, but it also somewhat discourages that and the last time I did that it also took a long time for it to get reviewed, although not as long as now)
There's no easy solution. Much like the recent curl security kerfuffle, the signal:noise ratio is important and hard to maintain.
For projects that I'd be interested in being a long-term contributor to, this is obviously different, but you don't become a long-term contributor without first dealing with the short-term, and if you make that experience a pain, I'm unlikely to stick around.
A big part of this is the friction in signing up; I hope federated forges become more of a thing, and I can carry my identity around and start using alternate forges without having to store yet another password in my password manager.
no.
Being a good coder has absolutely no correlation to being good at using Mercurial.
* contributors need to start somewhere, so even broken PRs can lead to having a valuable contributor if you're able to guide them.
Somehow I think you're holding the difficulty scale backwards!
I was thinking something different: I wonder whether Mozilla considered GitLab or Codeberg, which are the other two I know that are popular with open source projects that don't trust GitHub since it sold out to Microsoft.
(FWIW, Microsoft has been relatively gentle or subtle with GitHub, for whatever reason. Though presumably MS will backstab eventually. And you can debate whether that's already started, such as with pushing "AI" that launders open source software copyrights, and offering to indemnify users for violations. But I'd guess that a project would be pragmatically fine at least near term going with GitHub, though they're not setting a great example.)
"It depends", as always, but codeberg lacks features (that your use-case may not need, or may require), uptime/performance (that may be crucial or inconsequential to your use-case), familiarity (that may deter devs), integration (that may be time-consuming to build yourself or be unnessecary for your case) etc etc.
Now, both the desktop and the mobile version will be on Github, and the "issues" will stay on Bugzilla.
This will take advantage of both GitHub's good search and source browsing and Git's familiar system.
As a former Firefox and Thunderbird contributor, I have to say that I used local search instead of trying to find something on the mozilla-central website.
Of course, when you're actively developing software, you search inside your IDE, but allowing to find things easily on the website makes it more welcoming for potential new contributors.
On the contrary, I find searchfox to be the best code navigation tool I used. It has nice cross-language navigation features (like jumping from .webidl interface definition to c++ implementation), it has always-on blame (with more features too) and despite that it's really fast and feels extremely lightweight compared to GitHub interface. I really wish I had this with more projects, and I'll be sad if it ever dies.
Then MXR got replaced by DXR, itself replaced in 2020 by Searchfox (introduced in 2016).
On the other hand, the plethora of different self-hosted platforms with limited feature sets is a huge pain. Just finding the repo is often a frustrating exercise, and then trying to view, or worse, search the code without checking it out is often even more frustrating or straight out impossible.
But it's a lot of work to prevent abuse, especially for resource-intensive features when supporting signed-out use cases.
Surely most open source projects have a link to their source code? Whether it's github, gitlab, sourcehut, or anything else?
https://github.com/torvalds/linux
// EDIT: Source: https://news.ycombinator.com/item?id=43970574
https://github.com/mozilla-firefox/firefox/blob/main/.github...
I get it from GitHub’s perspective, it’s a nudge to get people to accept the core premise of ”social coding” and encouraging user pressure for mirrored projects to accept GitHub as a contribution entrypoint. I’m impressed by their successes and would attribute some of that to forced socialization practices such as not allowing PRs to be disabled. I’ve grown to dislike it and become disillusioned by GitHub over the course of a long time, but I’m in awe of how well it has worked for them.
Their docs were also a mess back then and made me recompile everything even when it wasn't needed.
To give a bit of additional context here, since the link doesn't have any:
The Firefox code has indeed recently moved from having its canonical home on mercurial at hg.mozilla.org to GitHub. This only affects the code; bugzilla is still being used for issue tracking, phabricator for code review and landing, and our taskcluster system for CI.
In the short term the mercurial servers still exist, and are synced from GitHub. That allows automated systems to transfer to the git backend over time rather than all at once. Mercurial is also still being used for the "try" repository (where you push to run CI on WIP patches), although it's increasingly behind an abstraction layer; that will also migrate later.
For people familiar with the old repos, "mozilla-central" is mapped onto the more standard branch name "main", and "autoland" is a branch called "autoland".
It's also true that it's been possible to contribute to Firefox using only git for a long time, although you had to install the "git cinnabar" extension. The choice between learning hg and using git+extension was a bit of an impediment for many new contributors, who most often knew git and not mercurial. Now that choice is no longer necessary. Glandium, who wrote git cinnabar, wrote extensively at the time this migration was first announced about the history of VCS at Mozilla, and gave a little more context on the reasons for the migration [1].
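(If memory serves, cinnabar worked as a git remote helper, so cloning looked something like:

    git clone hg::https://hg.mozilla.org/mozilla-central

after which you could use normal git commands against the hg repository.)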
So in the short term the differences from the point of view of contributors are minimal: using stock git is now the default and expected workflow, but apart from that not much else has changed. There may or may not eventually be support for GitHub-based workflows (i.e. PRs) but that is explicitly not part of this change.
On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.
If I may: what were the significant scale challenges for the self-hosted solution?
I used a GitLab + GitLab Runner (docker) pipeline for my Ph.D. project which did some verification after every push (since the code was scientific), and even that took 10 minutes to complete even if it was pretty basic. Debian's some packages need more than three hours in their own CI/CD pipeline.
Something like Mozilla Firefox, which is tested against regressions, performance, etc. (see https://www.arewefastyet.com) needs serious infrastructure and compute time to build in n different configurations (stable / testing / nightly + all the operating systems it supports) and then test at that scale. This needs essentially a server farm, to complete in reasonable time.
An infrastructure of that size needs at least two competent people to keep it connected to all relevant cogs and running at full performance, too.
So yes, it's a significant effort.
I'm not claiming that my comment was 100% accurate, but they plan to move some of the CI to GitHub, at least.
Given the frequency of comments on this site about Mozilla trying to do far too much rather than just focusing their efforts on core stuff like Firefox, I'm honestly a bit surprised that there aren't more people agreeing with this decision. Even with the other issues I have with Mozilla lately (like the whole debacle over the privacy policy changes and the extremely bizarre follow-up about what the definition of "selling user data" is), I don't see it as hypocritical to use GitHub while maintaining a stance that open solutions are better than closed ones, because making an open browser in the current era is a large and complicated enough goal that it's worth setting a high bar for taking on additional fights.

Insisting on maintaining their own version control servers feels like an effort they don't need to be taking on right now, and I'd much rather that Mozilla pick their battles carefully like this more often. Fighting for more open source hosting is a large enough battle that maybe it would make more sense for a separate organization focused on that to lead the front; providing an alternative to Chrome is a big enough struggle that it's not crazy for Mozilla to decide that GitHub's dominance has to be someone else's problem.
Your guess is wrong as Firefox doesn't use GitHub for any of that, and AFAIK there are no plans to either.
The blog post linked in the top comment goes into this in some detail, but in brief: git log, clone, diff, showing files, blame, etc. are CPU-expensive. You can see this locally on a large repo if you try something like "git log path/to/dir".
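E.g. (a sketch; the path is a placeholder):

    time git log --oneline -- path/to/dir | wc -l   # walks history to filter commits by path

Now imagine that cost multiplied across every contributor hitting the server.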
Add to this all the standard requirements of running any server that needs to be 1) fast, and 2) highly available.
And why bother when there's a free service available for you?
Firefox does indeed have a large CI system and ends up running thousands of jobs on each push to main (formerly mozilla-central), covering builds, linting, multiple testsuites, performance testing, etc. all across multiple platforms and configurations. In addition there are "try" pushes for work in progress patches, and various other kinds of non-CI tasks (e.g. fuzzing). That is all run on our taskcluster system and I don't believe there are any plans to change that.
The obvious generic challenges are availability and security: Firefox has contributors around the globe and if the VCS server goes down then it's hard to get work done (yes, you can work locally, but you can't land patches or ship fixes to users). Firefox is also a pretty high value target, and an attacker with access to the VCS server would be a problem.
To be clear I'm not claiming that there were specific problems related to these things; just that they represent challenges that Mozilla has to deal with when self hosting.
The other obvious problem at scale is performance. With a large repo, both read and write performance are concerns. Cloning the repo is the first step new contributors need to take, and if that's slow it can be a dealbreaker for many people, especially on less reliable internet. Our hg backend was using replication to help with this [1], but you can see from the link how much complexity that adds.
Firefox has enough contributors that write contention also becomes a problem; for example, pushing to the "try" repo (to run local patches through CI) often ended up taking tens of minutes waiting for a lock. This was (recently) mostly hidden from end users by pushing patches through a custom "lando" system that asynchronously queues the actual VCS push rather than blocking the user locally, but that's more of a mitigation than a real solution (lando is still required with the GitHub backend because it's the place where custom VCS rules, which previously lived directly in the hg server but don't map onto GitHub features, are enforced).
[1] https://mozilla-version-control-tools.readthedocs.io/en/late...
It is free and robust, and there is not much bad Microsoft can do to you. Because it is standard git, there is no lock-in. If they make a decision you don't like, migrating is just a git clone away. As for the "training Copilot" part: the code is public, so it changes nothing that Microsoft hosts the project on their own servers; they can just get the source like anyone else, and they probably already do.
Why not Codeberg? I don't know, maybe bandwidth, but if that's standard git, making a mirror on Codeberg should be trivial.
That's why git is awesome. The central repository is just a convention; technically, there is no difference between the original and a clone. You don't even need to be online to collaborate, as long as you have a way to exchange files.
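With git bundle, for instance, history travels as a plain file (a sketch):

    git bundle create project.bundle main     # pack the branch into a single file
    # move the file however you like: USB stick, email, carrier pigeon...
    git clone -b main project.bundle project  # clone from the file on the other machine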
It's a pet peeve and personal frustration of mine. "Do one thing and do it well" is also often forgotten in this part of open source projects. You are building a free alternative to Slack? Spend every hour on building the free alternative to Slack, not on self-hosting your GitLab, operating your CI/CD worker clusters, or debugging your wiki servers.
https://wiki.mozilla.org/GitHub#other_github
GitHub also has a lot of features and authentication scopes tied to the whole org, which is pretty risky for an org as large as Mozilla.
Unfortunately, often the cleaner option is to create a separate org, which is a pain to use (e.g. you log in to each separately even if they share the same SSO, PATs have to be authorised on each one separately, etc.).
In GitLab, you would have one instance or org for Mozilla, with a namespace for Firefox, another one for other stuff, etc.
It's like AWS accounts vs GCP projects. Yeah, there are ways around the organisational limitations, but the UX is still leaky.