Also, are automated version bumps really such a good thing? Many times I have wasted hours tracking down a bug that was introduced by bumping a library. Sometimes only the patch version of the library is different, so it shouldn't be breaking anything... but it does! It is so much better to update intentionally, test, then deploy. Though this does assume you have a modest number of dependencies, which pretty much excludes any kind of server-side JavaScript project.
(The larger problem here isn’t even Dependabot per se, since all Dependabot does is fire PRs off. The problem is that people then try to automate the merging of those PRs, and end up shooting themselves in the foot with GHA’s more general footguns. It also doesn’t help that, until recently, GitHub’s documentation recommended using these kinds of dangerous triggers for automating Dependabot.)
Really? Dependabot runs on a number of my repositories without my having consciously enabled it.
I've never experienced this. Do you have a `.github/dependabot.yml` file in your repository? That's how it's enabled.
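For reference, the smallest config that turns it on looks something like this (the npm ecosystem and weekly cadence are just placeholders for whatever fits your project):

```yaml
# .github/dependabot.yml -- minimal example; "npm" and the weekly
# schedule are placeholders for whatever ecosystem/cadence you use
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```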
(GitHub has muddied the water here a bit by having two related but distinct things with the same name: there's "Dependabot" the subject of this post, and then there's "Dependabot security updates" which are documented separately and appear to operate on a different cycle[1]. I don't know if this latter one is enabled by default or not, but the "normal" one is definitely disabled until you configure it.)
[1]: https://docs.github.com/en/code-security/dependabot/dependab...
Nope. Example: https://github.com/m50d/tierney/pull/55
Do you have a Dependabot entry in your account/org-level applications?
I don't think so. I have no memory of such a thing, and there is no org.
The corollary of reviewing all code on every dependency update is that you should also review all the code of any new dependencies you add, including whatever transformation the build process applies (what's in the package registry may differ from what's in the source repo), and the same for all transitive dependencies.
Same with the language and runtime tooling.
It is too hard to be perfect!
I get your question regarding scaling, but that's the job: you can choose to outsource code to 3rd-party libraries, and eternal vigilance is the trade-off.
Assume your 3rd-party dependencies will try to attack you at some point: they could be malicious; they could be hacked; they could be issued a secret court order; they could be corrupted; they could be beaten up until they pushed a change.
Unless you have some sort of contract or other legal protection and feel comfortable enforcing it, behave accordingly.
0: https://www.wiz.io/blog/github-action-tj-actions-changed-fil...
Still have flashbacks from that one time when some dependency in our Go project dropped support for go1.18 in a patch-version update, and we almost couldn't rebuild the project before Friday evening. Because obviously /s being literally unable to build the dependency is a backwards-compatible change.
Depends. Do you want to persist the belief that software requires constant maintenance because it's constantly changing? Then yes: automate your version bumps and do it as often as possible.
If you want software to be stable then only update versions when you have a bug.
1. Malicious code is injected into some project.
2. People have a chance to pick it up and put it into their code.
3. The malicious code is found, publicized, and people react.
The faster you act after step 1, the better the chance you'll pull the malicious code into your system before the world reaches step 3. Dependabot maximizes the speed of reaction after step 1. If I'm doing things somewhat more manually, I'm much more likely to hear about a corrupted dependency before I start incorporating it.

Now, just typing that out, it may sound like I'm more freaked out than I actually am. While supply-chain attacks are a problem, are getting worse, and will continue to get worse, they are also still an exotic situation bubbling on the fringe of my awareness, as opposed to something I'm encountering regularly. For a reasonable project, the most likely outcome is that Dependabot enlarging this exposure window will still not have any real-world impact, and I'm aware of that.

Where this becomes relevant is if you are thinking of Dependabot and its workflow primarily as a way of managing security risk, because you imagine updates as likely carrying security improvements. (That's as opposed to other uses, such as keeping your system from slowly falling behind in dependencies until it calcifies and can't be updated without a huge degree of effort — a perfectly reasonable threat to which Dependabot is a sensible response.) In that case, you also need to consider the ways it may actually increase your vulnerability to threats like supply-chain attacks.
And of course, projects do not start out with all their vulnerabilities on day one and then monotonically remove them. Many vulnerabilities are introduced later. For each such vulnerability there is a first release that includes it, a release for which treating the update as an unqualified Good Thing was in fact not true, and anyone who pulled it in as quickly as possible made a mistake. Unfortunately, sometimes hard problems are just hard problems.
Though I have wondered about the idea of building something like Dependabot but telling it: hey, tell me about known CVEs and security releases right away, but otherwise let things cook for 6 months before automatically opening a PR for me to update. That would radically reduce the risk I'm outlining here.
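To make that concrete, here's a sketch of what such a policy could look like as configuration. The `cooldown` block is hypothetical, invented purely for illustration; it is not a documented Dependabot option:

```yaml
# Hypothetical "let it cook" policy -- the cooldown block below is made up
# for illustration and is not a real Dependabot setting
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    cooldown:
      default-days: 180    # ordinary updates wait ~6 months before a PR is opened
      security-days: 0     # known CVEs / security releases come through immediately
```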
(In fact, after pondering, I'm kind of reminded of how Debian and a lot of Linux distros work, with their staged Cutting Edge versus Testing versus Stable versus Long Term Support. Dependabot sort of builds in the presumption that you want that Cutting Edge level of updates... but in many cases, no, I really don't. I'd much rather build with Stable or Long Term Support for a lot of things, and dip into the riskier end of the pool for specific things if I need to.)
https://docs.github.com/en/code-security/dependabot/dependab...
https://docs.github.com/en/code-security/dependabot/dependab...
I don’t have the exact exam language in front of me right now but the requirement would be something like “you have some process for learning about, assessing, and mitigating vulnerabilities in software dependencies that you use”.
Enabling an automated scan and version bump tool like dependabot is a common and easy way to prove your organization has those capabilities. But you could implement whatever process you want here and prove that you do it on the schedule you say you do in order to satisfy the audit requirement.
> Here's the trick: github.actor does not always refer to the actual creator of the Pull Request. It's the user who caused the latest event that triggered the workflow.
Also, `pull_request_target` is a big red flag in any GHA workflow, and it's even highlighted as dangerous in the GHA docs. It's like running untrusted code with all your secrets handed over to it.
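A condensed illustration of the pattern being criticized (not taken from any particular repo; the secret name is made up):

```yaml
name: dependabot-auto-merge        # illustrative only
on: pull_request_target            # runs in the base repo's context: secrets + write token
jobs:
  automerge:
    runs-on: ubuntu-latest
    # github.actor is whoever triggered the latest event, not necessarily the PR author
    if: github.actor == 'dependabot[bot]'
    steps:
      - uses: actions/checkout@v4
        with:
          # checking out the PR head pulls in untrusted code...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...and running its install/test scripts hands that code the repo's secrets
      - run: npm ci && npm test
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # hypothetical secret name
```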
For better or worse, it's a pattern that GitHub explicitly documents[1].
(An earlier version of this page also recommended `pull_request_target`, hence the long tail of public repositories that use it.)
[1]: https://docs.github.com/en/code-security/dependabot/working-...
No? In what world would it be safe to merge code, AI-generated or not, which you haven't reviewed, much less do it automatically without you even knowing it happened?
How do you know that you need the changes (whether bug or CVE)? How do you know the code isn't malicious? How do you know your systems are compatible with the change? How do you know you won't need to perform manual work during the migration?
The bottom line with these kinds of things is that virtually nobody should be using `pull_request_target`, even with “trusted” machine actors like Dependabot. It’s a pretty terrible footgun.
[1]: https://www.synacktiv.com/en/publications/github-actions-exp...
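For contrast, the shape GitHub's docs now steer people toward for Dependabot auto-merge is built on the plain `pull_request` event plus `dependabot/fetch-metadata`. Roughly the following, offered as a sketch rather than a drop-in, with the gate on the PR's author rather than `github.actor`, per the point upthread:

```yaml
name: dependabot-auto-merge
on: pull_request                  # not pull_request_target
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    runs-on: ubuntu-latest
    # gate on the PR author, not github.actor
    if: github.event.pull_request.user.login == 'dependabot[bot]'
    steps:
      - id: metadata
        uses: dependabot/fetch-metadata@v2
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
      # only auto-merge patch-level bumps; everything else waits for a human
      - if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Even then, auto-merging anything at all is a separate judgment call, as the rest of the thread argues; the point here is only that it can be done without handing `pull_request_target`'s privileges to the workflow.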