What the fuck.
Arko wanted a copy of the HTTP Access logs from rubygems.org so his consultancy could monetize the data, after RC determined they didn't really have the budget for secondary on-call.
Then after they removed him as a maintainer he logged in and changed the AWS root password.
In a certain sense this post justifies why RC wanted so badly to take ownership - I mean, here you have a maintainer who clearly has a desire to sell user data to make a buck - but the way it all played out with terrible communication and rookie mistakes on revoking access undermines faith in RC's ability to secure the service going forward.
Not to mention no explanation here of who legally "owned" the rubygems repo (not just the infra) and why they thought they had the right to claim it, which is something disputed by the "other" side.
Just a mess all around, nobody comes off looking very good here!
There are no actual control improvements here, just "we'll follow our procedures better next time", which imo is effectively doing nothing.
Also this is really lacking in detail about how it was determined that no PII was accessed. What audit logs were checked? Where was this data stored?
Overall this is a super disappointing postmortem...
I am curious what preventative measures you expect in this situation? To my knowledge it is not actually possible to disable the root account. They also had it restricted to only 3 people with MFA which also seems pretty reasonable.
It is not unheard of to end up in a situation where your ability to log in through normal means is gone (let's say it relies on Okta and Okta goes down) and you need to get into the account; root may be your only option in a disaster scenario. Given this was specifically for on-call, someone having that makes sense.
Not saying there were no failures, because there clearly were, but there have been times I have had to use root when I had no other option to get into an account.
I don't think the post mortem details whether the root access was on the org management account or an org member account.
As for other flows (break glass, non-SSO etc), that can all be handled using IAM users. You'd normally use SAML to assume a role, but when SSO is down you'd use your fallback IAM user and then assume the role you need.
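For what it's worth, a minimal sketch of that fallback flow with boto3 (the profile name and role ARN are made-up placeholders):

```python
# Break-glass sketch: authenticate with the fallback IAM user's credentials
# (kept under a dedicated profile, offline in normal operation), then assume
# the operational role you actually need. Names below are placeholders.
import boto3

session = boto3.Session(profile_name="break-glass")  # fallback IAM user creds
sts = session.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/OnCallAdmin",  # hypothetical role
    RoleSessionName="break-glass-oncall",
    DurationSeconds=3600,
)
creds = resp["Credentials"]

# Do the actual work with the temporary role credentials, not the IAM user.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_regions()["Regions"][0]["RegionName"])
```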
As for how you disable the root account: solo accounts can't, but you can still prevent use/mis-use by setting a random long password and not writing it down anywhere. In an Org, the org can disable root on member accounts.
If you have the ability to go through the reset flow, then how is that much different from the username and password being available to a limited set of users? That would not have prevented this from happening if the determination was made that all 3 of these users needed the ability to get into root.
As for having an IAM user, I fail to see how that is actually much better. You still have a user sitting there with long-running credentials that need to be saved somewhere outside of how you normally access AWS, meaning it is also something that could easily be missed when someone leaves.
Sure yes you could argue that the root user and that IAM user would have drastically different permissions, but the core problem would still exist.
But then you are adding another account (or accounts), on top of the root account that must exist anyway, that you now need to worry about.
Regardless of the option you take, the root of the problem they had was twofold. First, they did not have alerts on usage of the root account (which they would still need even if they switched to long-running IAM users instead, since the root reset flow still exists and would need monitoring too). Second, their offboarding workflow did not properly rotate that password; a similar problem would exist with a long-running IAM user, where offboarding would have to delete that user.
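On the alerting half of that, a rough sketch of what root-usage alerting could look like (boto3; the rule name and SNS topic ARN are placeholders, and it assumes CloudTrail is delivering events to EventBridge):

```python
# Sketch: an EventBridge rule that matches CloudTrail-delivered events where
# the caller identity is Root and forwards them to an SNS topic for alerting.
import json
import boto3

events = boto3.client("events")

root_usage_pattern = {
    "detail-type": ["AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

events.put_rule(
    Name="alert-on-root-usage",
    EventPattern=json.dumps(root_usage_pattern),
    State="ENABLED",
    Description="Notify when the root account does anything at all",
)

events.put_targets(
    Rule="alert-on-root-usage",
    Targets=[{
        "Id": "root-usage-sns",
        "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts",  # placeholder
    }],
)
```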
At the end of the day there is not a perfect solution to this problem, but I think just saying that you would never use root is ignoring several other issues that don't go away just by not using root.
As for all the other stuff: what it does is create distinct identities with distinct credentials and distinct policies. It means that there is no multi-party rotation required; you can nuke the identity and credentials of a specific person and be done with it. So again, a real solution to a real problem.
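To illustrate the "nuke one identity" point, a rough offboarding sketch with boto3 (the username is hypothetical, and attached policies/group memberships would also need removing before the final delete):

```python
# Offboarding sketch for a single IAM user: remove access keys, console
# password, and MFA devices, then delete the user. Username is a placeholder.
import boto3

iam = boto3.client("iam")
username = "departed-oncall-engineer"  # hypothetical

# Delete all access keys.
for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
    iam.delete_access_key(UserName=username, AccessKeyId=key["AccessKeyId"])

# Remove console access, if any.
try:
    iam.delete_login_profile(UserName=username)
except iam.exceptions.NoSuchEntityException:
    pass

# Deactivate MFA devices.
for mfa in iam.list_mfa_devices(UserName=username)["MFADevices"]:
    iam.deactivate_mfa_device(UserName=username, SerialNumber=mfa["SerialNumber"])

# Note: delete_user fails if policies, groups, or other attachments remain;
# those would need to be detached first in a real script.
iam.delete_user(UserName=username)
```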
I am wondering. Did they at least have MFA enabled on the root login or not ?
> Ruby Central failed to rotate the AWS root account credentials (password and MFA) after the departure of personnel with access to the shared vault.
Also in this day and age, there's no reason to have the root account creds in a shared vault, no-one should ever need to access the root account, everyone should have IAM accounts with only the necessary permissions.
absolutely
> no-one should ever need to access the root account
someone has to be able to access it (rarely)
if you're a micro-org having three people with the ability to get it doesn't seem that bad
everything else they did is however terrible practice
This sure doesn't reflect all this supposed professionalism and improvements RC was supposed to make.
Years ago, with all the DHH drama, I decided that using Rails was too much of a liability, and this shit just makes the whole Ruby ecosystem a liability for anything built in it.
All this proposal does is have one of the maintainers/on-call providers(?) request another entry in this Privacy Notice as part of a payment deal.
This is a mess, but it also unnecessarily smears both sides. It calls out that RubyCentral had poor cloud management in place, and it trashes an on-call provider.
This is a terrible postmortem and all it does is advertise to users that RubyCentral doesn't know what it's doing.
Unfortunately, many enterprises follow the poor practice of storing shared credentials in a shared password manager without rotating them when an employee with prior access leaves the company.
I think we need an f-droid-like project for Rubygems that builds the gems from source, and takes care of signing, and is backed by a non-profit that is independent from Rails/Shopify
You could pre-resolve every dependency in your chain to a git repository, even to a fork under your own control, but that will end up being a maintenance nightmare.
Can some VPS/serverless provider not do this, like fly.io (Kurt being a recent example) or Hetzner?
I think that golang's model can actually be cheaper/more cost-effective on the server side compared to how Ruby might be doing it right now, and cheaper might mean a new non-profit could be created that works with less money, outside funding, and drama overall.
Not even sure why you are being downvoted, this is such a great idea actually.
F-droid has been so professional.
There was this developer (axet) who recently accused f-droid of "maliciously" convincing users that the funds were going to both the creator and f-droid, when in reality they were going to f-droid only, and he name-called them and whatnot...
Do you know what f-droid team still said?
They said that they could help him with the donation process and remove their own link, and from what I know they actually took some of the feedback...
They clarified on their about page that money donated through the f-droid website's homepage goes to f-droid only, which should be obvious, but for some it wasn't.
They also used to put an f-droid donate link among apps' website links; I am not sure when they stopped that, but they did stop it, and I deeply, deeply respect it.
Like, okay, maybe mistakes happen, but f-droid is a seriously good organization. We might need something like that for sure. From thinking about open source so much, I genuinely realized that we need to have priorities about which open source projects we support.
F-droid is at the top of the list, it's just that great; then there is Signal/GrapheneOS, or maybe all 3 are at the top...
F-droid as an organization is something that I deeply appreciate, and it's a shame about Google's attestation. I genuinely love f-droid nowadays.
Expressing negative opinions about DHH is not well-received here.
Oddly enough the Ruby community includes both the most thoughtful and gentle people and the biggest assholes I know... I refuse to believe the latter are not fringe.
At least a misdemeanor; most of the time it's prosecuted, it's as a felony.
Horrible time to own/run a consultancy. Can't imagine what his other customers are thinking right now.
I brought up multiple times that his actions were suspicious, was downvoted. Now proof of that plus an email trying to low-key extort RubyCentral into allowing him to sell user data...
If there's any evidence that you need to know who the proper stewards of Ruby's gems are, it's this.
Your post suggests conspiratorial thinking where there shouldn't be any.
First of all, it's criminal, and second of all, it absolutely lights a torch to any credibility they have. I expect people don't want to become unhireable.
I've had access/credentials to organizations that I've left and never abused them even once.
They're claiming "no evidence of compromise" based on CloudTrail logs that AWS root could have deleted or modified. They even admit they "Enabled AWS CloudTrail" after regaining control - meaning CloudTrail wasn't running during the compromise window.
You cannot verify supply chain integrity from logs on a system where root was compromised, and you definitely can't verify it when the logs didn't exist (they enabled them during remediation?).
So basically, somebody correct me here if I'm wrong but ... Every gem published Sept 19-30 is suspect. Production Ruby applications running code from that window have no way to verify it wasn't backdoored. The correct response is to freeze publishing, rebuild from scratch (including re-publishing any packages published at the time? Ugh I don't even know how to do this! ) , and verify against offline backups. Instead they rotated passwords and called it done.
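On the "verify against offline backups" part, a hypothetical sketch of what that comparison could look like (paths are made up; this assumes you still have a pre-incident snapshot of the .gem files):

```python
# Illustrative only: compare SHA-256 digests of the .gem files being served
# today against digests from an offline snapshot taken before the exposure
# window. Both directory paths below are hypothetical.
import hashlib
from pathlib import Path

def digests(directory):
    """Map each .gem filename under `directory` to its SHA-256 hex digest."""
    return {
        path.name: hashlib.sha256(path.read_bytes()).hexdigest()
        for path in Path(directory).rglob("*.gem")
    }

before = digests("/backups/gems-2025-09-18")  # offline, pre-incident snapshot
after = digests("/mirror/gems-current")       # what is being served now

for name in sorted(after):
    if name in before and before[name] != after[name]:
        print(f"MODIFIED: {name}")
    elif name not in before:
        print(f"PUBLISHED DURING/AFTER THE WINDOW: {name}")
```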
1. Admit that he was the unauthorized actor (which means he's probably admitting to a crime?)
2. Have him attest he didn't exfil data or modify the integrity of the service while committing a crime.
If I was Ruby Central I would give clemency on #1 in exchange for #2 and I think #2 helps Andre Arko.
I have been waiting to hear if there would be any civil action on it since it's not at all clear they had any rights to do most of what they did.
https://www.reddit.com/r/ruby/comments/1o2bxol/comment/ninly...
>> Why did Joel give so little time of advance notice before publishing his post revealing Andre’s production access? That struck me as irresponsible disclosure, but I may have missed something.
> I decided to publish when I did because I knew that Ruby Central had been informed and I wanted the world to be informed about how sloppy Ruby Central were with security, despite their security posturing as an excuse to take over open source projects.
> What I revealed changed nothing about Ruby Central’s security, since André had access whether I revealed that he did or not. When you have security information that impacts lots of people, you publish it so they can take precautions. That is responsible disclosure.
How can they ensure that nobody else did any tampering?
It seems RubyCentral did not think this through completely.
Also, you can enable CloudTrail log file validation, which lets you know whether you're looking at tampered logs or not.
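For reference, a sketch of enabling it (boto3; the trail name is a placeholder), plus the out-of-band verification step:

```python
# Sketch: turn on CloudTrail log file validation so tampering with delivered
# log files becomes detectable via AWS-signed digest files.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.update_trail(
    Name="org-trail",               # hypothetical trail name
    EnableLogFileValidation=True,   # publish signed per-hour digest files
)

# Verification is then done separately, e.g. with the CLI:
#   aws cloudtrail validate-logs \
#       --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/org-trail \
#       --start-time 2025-09-19T00:00:00Z
```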
Really it all depends on how their accounts are set up. Unless you know the operational details you can't make a call here.
I've run a multi-million dollar/year AWS Org for the last decade or so and setting things up this way is kind of brass tacks.
Based on how things have been described on both sides, it actually sounds like they do a pretty good job. Oversights happen -- we're all human -- and this access was already limited to a small single-digit number of people. Given the history, it's reasonable that Arko would have had this high of a level of access and the oversight was in forgetting that when removing him.
Also it's reasonable to assume that people with that access wouldn't do something criminal/malicious, and if they did, while annoying, the situation is very easily recoverable. Especially if you're using IaC tooling as you mentioned.
If you're already taking the position that Ruby Central are "the bad guys" it's easy to assume that they're doing everything wrong, but that would be a mistake.
this is the problem when you fire all the maintainers who do anything
1. Create another "management" AWS account, and make your other AWS account a child to that.
2. Ensure no one ever logs in to the "management" account, as there shouldn't be any business purpose in doing so. For example, you should require a hardware key to log in.
3. Configure the "management" account to force child accounts to enable AWS Config, AWS CloudTrail, etc. Also force them to duplicate logs to the "management" account.
Step 2 is important. At the end of the day, an organization can always find a way to render their security measures useless.
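For step 3 above, a minimal sketch (boto3; trail and bucket names are placeholders) of an organization trail created from the management account, which member accounts cannot turn off:

```python
# Sketch: an organization trail owned by the management account, delivering
# every member account's logs to a bucket the members cannot touch.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="org-cloudtrail-logs",  # bucket in the management account
    IsOrganizationTrail=True,            # applies to every member account
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-trail")
```

AWS Config enforcement would need its own setup (e.g. organization-wide Config rules), but the trail above covers the CloudTrail piece.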
You can enable the persistent storage of trails, but you can always access 90 days of events regardless of whether that is enabled.
CloudTrail can be configured to save logs to S3 or CloudWatch Logs, but I think that even if you were to disable, delete, or tamper with these logs, you can still search and download unaltered logs directly from AWS using the CloudTrail Events page.
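A rough sketch of pulling that 90-day Event history programmatically (boto3; no trail required), filtering on the raw event payload's userIdentity.type since that's where root shows up:

```python
# Sketch: list root-account activity from the 90-day Event history via
# LookupEvents, which works even when no trail has ever been configured.
import json
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start, EndTime=end):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        if detail.get("userIdentity", {}).get("type") == "Root":
            print(event["EventTime"], event["EventName"])
```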
* S3 object reads/writes (GetObject, PutObject) - these are "data events" requiring explicit configuration[0]
* SSH/RDP to EC2 instances - CloudTrail only captures AWS API calls, not OS-level activity[1]
With root access for 11 days, someone could modify gem files in S3, backdoor packages, SSH into build servers - none of it would appear in the logs they reviewed. Correct?
[0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/l...
[1] https://repost.aws/questions/QUVsPRWwclS0KbWOYXvSla3w/cloud-...
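To make the data-events point concrete, here's roughly what that explicit configuration looks like (boto3 sketch; the trail and bucket names are placeholders). Without an event selector like this, GetObject/PutObject on the gems bucket never reaches CloudTrail at all:

```python
# Sketch: enable S3 data events (object-level reads/writes) for one bucket
# on an existing trail. Trail and bucket names are placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="org-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash = all objects in the bucket.
                    "Values": ["arn:aws:s3:::example-gems-bucket/"],
                }
            ],
        }
    ],
)
```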
Organizations are also useful because you can attach SCPs to your accounts that deny broad classes of activities even to the root user.
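For example, something like the following SCP (boto3 sketch; policy name and OU id are placeholders) denies all actions to member-account root users. Note that SCPs never apply to the management account itself:

```python
# Sketch: create and attach an SCP that denies everything to the root user
# of member accounts in an OU.
import json
import boto3

org = boto3.client("organizations")

deny_root_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRootUser",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}},
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(deny_root_scp),
    Description="Deny all actions for the root user of member accounts",
    Name="deny-root-user",
    Type="SERVICE_CONTROL_POLICY",
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # hypothetical OU id
)
```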
"We collect information related to web traffic such as IP addresses and geolocation data for security-relevant events and to analyze how and where RubyGems.org is used."
(https://rubygems.org/policies/privacy)
"We may share aggregate or de-identified information with third parties for research, marketing, analytics, and other purposes, provided such information does not identify a particular individual."
I think they make a lot of overly strong claims here, even though there are plenty of alternative explanations possible. The mere fact that 3 people had AWS root access during this period but they only identify one and never question that it could have been one of the others is telling. They reallllly want you to just take it as obvious that 1) all these actions were taken by 1 individual and 2) that individual was malicious. Then they sprinkle in enough nasty sounding activities and info about Andre to get you to draw the conclusion that he is bad, and did bad things, and they had to do these things the way they did.
Using what reads like a business strategy email as a 'nefarious backstory' is so bad faith. I bet if you got access to all the board's emails you would see a ton of proposals for ways to support RubyGems that may not all sound great in isolation. They are being just transparent enough to bad mouth Andre while hiding any motivations from their end as purely 'security' related.
> 1. While Ruby Central correctly removed access to shared credentials through its enterprise password manager prior to the incident, our staff did not consider the possibility that this credential may have been copied or exfiltrated to other password managers outside of Ruby Central’s visibility or control.
> 2. Ruby Central failed to rotate the AWS root account credentials (password and MFA) after the departure of personnel with access to the shared vault.
I also highly recommend not accepting RubyCentral's current strategy of posting very isolated emails and insinuating that "this is the ultimate, final proof". We all know that an email conversation often involves lots of emails, so doing a piecemeal release really feels strange. Plus, there also were in-person meetings - why does RubyCentral not release what was discussed there? Was there a conflict of interest due to financial pressure?
Also, as was already pointed out, RubyCentral has already lawyered up - see the discussions on reddit. Is this really the transparency we as users and developers want to see? This is blowing up by the day, and no matter from which side you want to look at it, RubyCentral sits at the center; or, at the very least, made numerous mistakes and tries to cover past mistakes by ... making more mistakes. I think it would be better to dissolve RubyCentral. Let's start from a clean slate here; let's find rules of engagement that don't put rich corporations atop the whole ecosystem.
Last but not least - this tactical slandering is really annoying. If they have factual evidence, they need to bring the matter to court; if they don't, they need to stop slandering people. To my knowledge RubyCentral hasn't yet started a court case, and I have a slight suspicion that they also will not, because we, as the general public, would then demand COMPLETE transparency, including about ALL of RubyCentral's members and their activities here. So my recommendation is: wait for a while, let those accused respond.
Literally all we've heard so far is from the other side...
> If they have factual evidence, they need to bring the matter to a court
I'd be surprised if they aren't. This post feels very much like the amount of disclosure a lawyer would recommend to reassure stakeholders.
> rules of engagement that doesn't put rich corporations atop the whole ecosystem
Right now the only thing stopping us all from being held hostage by rogue maintainers is a rich corporation.
If I'm reading it right, it seems quite petty (and a bit cowardly). Arko was a maintainer, was he not? How is that a breach? Presumably his credentials were not misbegotten, or is that the accusation?
WTF. This is the same guy that just launched gems.coop, a competing index for Ruby gems.
On the other hand, RubyCentral's actions were truly incompetent; I don't know anymore who is worse.
Any part of this narrative could be false, but I don't see a way to read it and take it as true where Arko's actions would be OK.