ETA: According to a Reddit post linked elsewhere in this thread, the payload was a binary file downloaded by a python script in the repository. It has been uploaded to VirusTotal, but downloading requires a premium subscription according to their docs: https://www.virustotal.com/gui/file/d9f0df8da6d66aaae024bdca...
One could go even further and list all new commits, making it super easy for the user to check them. Maybe even integrate an LLM to help? Maybe commits from non long-time contributors could be flagged?
There has to be a way to help users programmatically review updates to their AUR packages. Even if most of them won't spend the time.
This particular issue is with a binary (i.e. pre-built) package. Normally in Arch it's expected that you build an AUR package yourself, and most if not all AUR helpers prompt you to review and/or edit the PKGBUILD before anything is executed.
Basically you could spot something suspicious in a source package, not so much in a binary package.
AUR clients already show you the diff when you update a package, but note that these were completely new packages anyway, uploaded 2 days ago, so that doesn't really apply here.
LLMs are useless for reviewing whether something is malicious; their false-positive rates would be way too high. And even ignoring that, you'd have to hide the LLM's code from the attacker, or they can just check whether their package is detected as malicious and modify it until it isn't. Not something open source projects are keen on doing.
The program I use for AUR (Rua) still displays exactly what you're about to build (as a git diff), before you build it, even if it's the first time/release. I'd assume all the other "AUR managers" would work the same way?
Also anyone who wants to try "Gaming on Linux" needs bleeding edge kernel which is Arch's default setup compared to other distros.
1) do you want an intermediary between you and the upstream? for example, to patch out telemetry
2) is it important that what you're using continues to work the same way so you can focus on your actual work?
No answer to either is consequence-free, e.g. for 1), see the Debian SSH patch event, or for 2), if the answer is "it doesn't work", then that kinda forces one's hand.
The "everything changing all at once" thing is what eventually drove me to arch (as the most popular at the time rolling release distro - and more stable at the time than debian sid), I'd personally rather have smaller breaking changes more frequently. Though it's probably less painful now to update debian versions than it use to be because things generally work better without configuration than they used to.
I love Arch Linux, but please...
(Arch Linux is already "fast" (depends on what you install for your DE, if any) and customizable.)
Does Gentoo with make.conf (/etc/portage/make.conf[1]) set to "CFLAGS="-O3 -march=native -flto"" mean that Gentoo, a Linux distribution, is performant?
[1] It is not a good idea to build everything with LTO or PGO enabled, because not all packages support LTO/PGO cleanly. Do it on a per-package basis.
For me it feels blazingly fast, even on an obsolete Kaby Lake Core i5/7(T) forcibly clocked down to about 800MHz most of the time :)
It fucking flies without much effort. On modern systems even more so. While being rock solid. Without any crashes. Even under Plasma. When I'm reading about bugfixes regarding crashes under Plasma I just shrug and think "Waddya talkin about?". That may be hardware dependent, though, because they are old Lenovo Thinkcentres(1Litre SFF M910q tiny) with excellent firmware.
Using btrfs, profile-sync-daemon, zram (yes, even with 32GB RAM!). Suspend/resume working every single time. No glitches, hiccups, ever. So far. Since 10th of June, 2024.
Edit: Almost always some music out of yt doodling in some bg-tab, in oh-so-slow FF, without any clicks, stuttering, or other breaks.
No need for yt-dlp, mpv at all. Except for dl/saving stuff, sometimes. While FF is rarely under 100 tabs.
My i3 with vim / emacs and even VSCodium flies too, on X Linux. :P
The browser is always the slowest in my case and this has always been my experience, and unfortunately it still is.
Sidebery (a Tree Style Tabs-like extension) was the ugliest offender there. Though that may have been me misconfiguring it. OTOH I didn't manage to find settings where it didn't do that and still looked the way I wanted. At that time, maybe a year ago, I thought of it as a potential 'instant SSD-killer'. Couldn't be bothered. Uninstalled it. Now FF has a basic version of vertical tabs from Mozilla itself. It suffices (for now).
As for tabs, I like the way Vivaldi does it and allows me to customize.
By "Not even insane amounts of RAM, it usually takes about 4 to 5GB, rarely going to 8, then shrinking back a while after closing too much tabs." I meant to say that this applies to the resident size in RAM of the whole browser, not what PSD does, or adds. That would be just what your browser profile is using 'on disk'. Peanuts, so to speak.
With only 8GB it's really hard to tell. It depends on your usage patterns.
First, regarding just PSD, it relies on RuntimeDirectorySize= of https://man.archlinux.org/man/logind.conf.5 which by default is limited to use up to 10% of system memory, but not statically reserved, only "on demand".
Which in turn relies on tmpfs which can use up to half of system RAM by default. Again, on demand only, not statically reserved.
https://wiki.archlinux.org/title/Tmpfs https://wiki.gentoo.org/wiki/Tmpfs
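If a synced profile ever bumps into that 10% default, the limit can be raised with a logind drop-in. The option is real; the path, filename, and value below are just an example:

```ini
# /etc/systemd/logind.conf.d/10-runtime-size.conf (drop-in; filename is arbitrary)
[Login]
# Let /run/user/$UID grow to 25% of RAM instead of the 10% default.
# Like any tmpfs, this is allocated on demand, not reserved up front.
RuntimeDirectorySize=25%
```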
However, I think these are the wrong knobs to turn :-)
PSD can make use of overlayfs, which saves a little RAM and is faster on the initial sync, but uses more disk space in exchange. Not that much, though, and it can be further minimized via the number of kept profile backups, or by not using backups at all.
Just keep your profile lean and mean, then there's less stuff to shuffle around. Since you're using Vivaldi, pointing you to wiki pages on making Firefox use less RAM seems pointless ;)
Maybe (carefully) use something like https://www.bleachbit.org for cleaning up Vivaldi's profile.
Which leaves things like
https://en.wikipedia.org/wiki/Zram https://wiki.archlinux.org/title/Zram https://wiki.gentoo.org/wiki/Zram https://www.kernel.org/doc/html/latest/admin-guide/blockdev/...
OR
https://en.wikipedia.org/wiki/Zswap https://wiki.archlinux.org/title/Zswap https://wiki.gentoo.org/wiki/Zswap https://docs.kernel.org/admin-guide/mm/zswap.html
to consider.
While it may seem insane to reserve already limited RAM for yet another thing, these are worth it, if configured right. I've used them, or their predecessors, since olden times, when I just had a Thinkpad T60p with some Centrino and only 4GB.
That made things better in general. Of course it's no silver bullet for everything, but it made the system less sluggish, and it took longer to slow down because of being 'swapped to death'.
From then on I continued to use stuff like that.
On a system with only 8GB, too. ZRAM in that case, because a backing device (which zswap needs) was impossible: the HDD had died. So I booted live from USB (2.0, arrgh!) and ran from RAM.
By means of antiX, which btw showed me how sysctls regarding swappiness, pressure stall information, and related knobs can totally change the behaviour of a system.
Even if it looks strange at first (which can easily be remastered away anyway), the devs really know how to get the most out of older systems with limited RAM and power, in interesting ways. Worth looking at, even if only for technical 'inspiration'.
For instance, making things like Firefox shrink back after having closed too many tabs, and remain usable while doing so.
Anyway. Depending on what you do, 8GB only can go a looong way, if configured/used right.
IMO not using ZRAM/ZSWAP, sysctls for swappiness, PSI, etc. is wrong and wasteful.
PSD is just a little icing on the cake.
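A minimal version of such a setup, assuming the zram-generator package; the sysctl values follow tuning advice commonly given for zram swap (e.g. on the Arch wiki) and are worth benchmarking on your own workload:

```ini
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = zstd

# /etc/sysctl.d/99-vm-zram.conf
# zram swap is fast, so prefer swapping aggressively over dropping page cache.
vm.swappiness = 180
vm.page-cluster = 0
vm.watermark_boost_factor = 0
```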
Where "measure" can be anything from htop/atop to https://github.com/cdown/psi-notify , https://github.com/noiseonwires/memory-pressure , and so on.
To understand what these sysctls are about, how they are related to each other, and to ZRAM, I'd recommend reading, or at least skimming https://chrisdown.name/2018/01/02/in-defence-of-swap.html , https://chrisdown.name/2019/07/18/linux-memory-management-at... , https://linuxblog.io/linux-performance-almost-always-add-swa... , https://linuxblog.io/linux-performance-almost-always-add-swa... , https://linuxblog.io/linux-performance-no-swap-space , https://linuxblog.io/running-out-of-ram-linux-add-zram , https://github.com/ValveSoftware/SteamOS/issues/899 , https://lonesysadmin.net/2013/12/22/better-linux-disk-cachin... , https://github.com/CachyOS/CachyOS-Settings/pull/19 , https://github.com/CachyOS/CachyOS-Settings , https://docs.kernel.org/admin-guide/sysctl/vm.html , https://docs.kernel.org/accounting/psi.html , https://facebookmicrosites.github.io/psi/docs/overview , https://unixism.net/2019/08/linux-pressure-stall-information...
Even if it seems redundant; OFC skim and discard anything you already know (but I can't know what that is).
The following not to use them, but for their POVs/concepts regarding the same problem: https://github.com/facebookincubator/oomd , https://github.com/rfjakob/earlyoom , https://github.com/hakavlad/nohang
These, to maybe take inspiration from, to adjust dynamically:
https://www.linuxbash.sh/post/tune-vmswappiness-in-a-script-... , https://github.com/lululoid/LMKD-PSI-Activator (Yes, Android, I know, but still...)
Then there is the whole eBPF thing, which opens up many more knobs to turn dynamically, on demand, according to whatever policies you like.
https://www.brendangregg.com/ebpf.html / https://deepwiki.com/oracle/bpftune/1-overview
Have fun :)
The "CachyOS" page was deleted[1], and replaced with a redirect to the Arch Linux page. But CachyOS is not mentioned anywhere on that page, nor on the "List of Linux distributions § Arch Linux-based" page.
[1]: https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletio...
There are a lot of derivative (I don't mean that in a negative way) distros out there; I'm not sure they all need pages.
There is a reason Ubuntu is usually the first distro new Linux users go to. For almost a decade now, installing a feature-complete Ubuntu setup is not much more difficult than reimaging Windows.
Personally I've been running Arch on my work machine for a few years now with very few issues. I'm not even very consistent with updates, and probably run them about once every 3 weeks on average. I have only had to manually intervene on a handful of occasions.
I like it a lot because everything is always up-to-date. I don't face any issues with unsupported versions for tools like I have with Debian in the past. The rolling release model also saves me the pain of doing a "hard" OS upgrade, which often come with issues.
Eventually I got busier and had less time to tinker, so I migrated to Ubuntu LTS, which has some small warts but has needed practically no babysitting compared to Arch. I was surprised when the Arch memes resurfaced this year, but that's the only growth I've seen. None of my Linux-savvy peers use Arch, BTW.
I've had way more problems with Ubuntu trying to be convenient and bringing in lots of Windows-style automation that breaks more often than it works (and when that happens, you're really on your own since you have no idea how it's put together — just like in Windows).
Or even just bugs that were solved upstream ages ago (and have been available in every rolling-release distribution, including Debian testing/sid).
The current Arch installer suggests btrfs with snapper, so you get automatic snapshots pretty much out of the box (need to check one flag in installer), and can easily rollback if something breaks. Not something I needed, but it's there.
At least this guy has been using it as a daily driver (at home and at work) for at least fifteen years.
I switched away from Arch (to Ubuntu) as a sort of side effect of switching computers a couple years ago (desktop->laptop, though Ubuntu would “bring the batteries along” more conveniently). Ubuntu is fine I guess, but I really miss the stability of rolling release and the user-friendliness of not having too many built in programs.
Then had to use something 'officially' supported for a while, then did some Debian derivative live-distro running from USB/in RAM because of HW-problems, and settled for CachyOS when new (old) HW arrived.
I update maybe once a month at most, more likely every two months, because I don't give a shit. With the exception of FF, or maybe some nicer kernel, for eBPF and scheduler stuff.
That's reviewing changes in a few config files, after having read up about them at Archs & CachyOS sites. Maybe five minutes max, opening a few relevant tabs. (If necessary at all, which often isn't the case.)
Starting Pacman. Downloads instantly, even if several GB. Decompresses and installs stuff. Maybe two to three minutes. Reboot. 20 seconds. Plasma is back.
Clicking FF. Back with all its tabs. Maybe two to three seconds. Maybe uBO blocks a few more secs sometimes, while updating lists.
After intentionally having killed it with -9 in preparation before reboot.
Cleaning Pacman's package-cache and btrfs-snapshots because The only way is Fooorwaaard!
( https://www.youtube.com/watch?v=e_tVzx_PIH8 Daxon ft. Numa - The Only Way (Extended Mix) [COLDHARBOUR RECORDINGS] 7mins, 7secs )
Letting btrfs rebalance in the background.
Opening other stuff, on other virtual desktops, being exactly where and how I left it, thanks to working session-mgmt.
Feels very convenient to me, in opposition to most of the other 'mainstream stuff'.
Maybe the memes have a core of truth to them? For ppl who know what they do?
Cachy, Cachy, Caramba, Yay, Yay!
Citation needed.
I'm not sure that's true. Neither I nor most people I know who use Arch (granted, most of them are professional software developers) install software from the internet willy-nilly and without reviewing anything, if by AUR or "curl | bash", especially when on their main computers.
Archlinux is a distro that’s designed for the user to control their own system, and the AUR is clear about what it is and the nature of the packages in it.
> Warning: AUR packages are user-produced content. These PKGBUILDs are completely unofficial and have not been thoroughly vetted. Any use of the provided files is at your own risk.
This is from https://wiki.archlinux.org/title/Arch_User_Repository.
> Warning: AUR helpers are not supported by Arch Linux. You should become familiar with the manual build process in order to be prepared to troubleshoot problems.
This is from https://wiki.archlinux.org/title/AUR_helpers.
"yay" is one of the most common AUR helpers, it requires two confirmations from what I counted. One of them is to inspect the PKGBUILD file, the other one is just to proceed.
But, maybe it would be best not to have “yay” available. Using something like AUR without reading the package build files is… pretty bad, right? And it is bad for the community, because if there is a convention of doing that sort of thing, it makes the AUR a good target for attacking.
Yay itself is in the AUR. You have to go out of your way to install it.
The Archlinux docs on AUR helpers lead with a red warning: https://wiki.archlinux.org/title/AUR_helpers
I don't remember how yay works but paru (another AUR package manager) displays the pkgbuild file before it will install.
Even if you're using an immutable distro, your KDE Plasma session can get hijacked if you simply use the built-in wizard to install 3rd-party desktop widgets, which is a right-click plus a single click away on any Plasma desktop.
IIRC, the post was just a single paragraph, praising how they “found” the zen-browser-patched-bin package on the AUR and how much it helped them.
[0]: https://www.reddit.com/r/archlinux/comments/1m30py8/aur_is_s...
[1]https://web.archive.org/web/20250718140411/https://aur.archl...
They have to be installed via "pacman -U package_file"
Arch developers could extend "pacman -U" so that it performs a VirusTotal scan before installation for each package.
VirusTotal's API is free.
- https://docs.virustotal.com/docs/api-scripts-and-client-libr... - https://docs.virustotal.com/docs/please-give-me-an-api-key - https://docs.virustotal.com/docs/consumption-quotas-handled
Since it is end users who are doing the upload and virus-scan check, there won't be a consumption-quota issue with VirusTotal.
Lastly, "pacman -U" should flag failed VirusTotal scans to Arch Security.
Arch's pacman and Flathub's flatpak package managers should be the last line of defence when installing untrusted packages by end users.
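A hedged sketch of the lookup half of such a check: compute the package file's SHA-256 and build the VirusTotal v3 file-report URL. The filename below is made up, and the stand-in file only exists so the sketch is runnable; a real client would need an API key.

```shell
# Made-up package filename; real checks would hash the actual .pkg.tar.zst.
pkgfile=example-patch-bin-1.0-1-x86_64.pkg.tar.zst
printf 'stand-in package contents' > "$pkgfile"

# VirusTotal's v3 API looks reports up by file hash.
hash=$(sha256sum "$pkgfile" | cut -d' ' -f1)
echo "https://www.virustotal.com/api/v3/files/$hash"

# A real client would then query it, roughly:
#   curl -s -H "x-apikey: $VT_API_KEY" "https://www.virustotal.com/api/v3/files/$hash"
```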
Pacman cannot be used to download, compile, or install AUR packages. You need the PKGBUILD file and use "makepkg -si" at the very least. If you want AUR packages, you'd install a package manager (in this context referred to as an AUR helper) like "yay" that supports both official and unofficial (i.e. AUR) packages. FWIW AUR helpers are not official packages themselves, not even "yay", which is a popular one. You need to go out of your way to install "yay" (although it's only one command away, i.e. very easy).
TL;DR: Pacman does not download, compile, or install packages from the AUR, nor does it resolve their dependencies. "makepkg -si" builds and installs a package based on the PKGBUILD file, or use an AUR helper that overcomes the limitations of "makepkg". AUR helpers make it easy to install AUR (i.e. unofficial) packages.
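The manual flow is roughly: clone the AUR repo, read the PKGBUILD, then makepkg -si. One small, automatable slice of that review can be sketched as flagging source= lines that don't match the expected upstream; the package name, URL, and upstream host below are all made up:

```shell
# Manual AUR flow (illustrative; not run here):
#   git clone https://aur.archlinux.org/<pkgname>.git
#   less PKGBUILD            # read every line before building
#   makepkg -si              # build and install as an unprivileged user

# Mock PKGBUILD standing in for a cloned one, so the check below is runnable.
cat > PKGBUILD <<'EOF'
pkgname=example-patched-bin
source=("https://shady-cdn.example/payload.tar.gz")
EOF

# Flag source= arrays that pull from hosts other than the expected upstream.
expected_upstream='github.com/example/example'
if grep -E '^source=' PKGBUILD | grep -qv "$expected_upstream"; then
    echo "review needed: source URL does not match expected upstream"
fi
```

This is no substitute for reading the whole PKGBUILD (and any .install files), but it catches the exact pattern used here: a binary payload pulled from a non-upstream host.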
This is a situation where you have to go out of your way and be naive to be affected. You simply can't protect the user from everything.
But more importantly this is a terrible idea in regards to privacy/infosec. I do not want packages I build and install myself to be uploaded to a 3rd party website.
And for what benefit? 99% of new malware won't be detected anyway, and once it is known it is way more effective to just remove the offending package from the AUR.
To ensure reproducible / clean builds, I thought makepkg would always be run in a sandbox/chroot environment. The damage done would be localised to that sandbox.
> this is a terrible idea in regards to privacy/infosec.
Ok. Devs could set up an option to pacman -U which allows it to bypass VT for privacy-sensitive people. This just puts the onus on you to ensure you aren't installing malware. The default Arch user would still be protected while allowing for your privacy needs.
> 99% of new malware won't be detected anyway, and once it is known it is way more effective to just remove the offending package from the AUR
It's too late then. People are already affected.
No, makepkg doesn’t run in a sandbox. The system tries to stop you from running it as root, but otherwise all validation of the trustworthiness of the pkgbuild and any sandboxing of the build process are left up to the user. This is part of why pacman, the 1st party package manager, does not fetch from the AUR.
Likewise, it would be generally against the Arch ethos to have the default behavior of the package manager interact with a 3rd party service. If a user wants that action, they’d need to perform it themselves.
makepkg runs in a fakeroot environment, but this is not a security barrier. There is also support for building inside systemd containers, offering at least limited security, but most AUR helpers don't use that yet.
> Ok. Devs could setup an option to pacman -U which allows it to bypass VT for privacy sensitive people. This just puts the onus on you to not ensure you aren't installing malware. The default Arch user should still be protected while allowing for your privacy needs.
You mistake the target group of Arch Linux. Users are expected to read the documentation and to know what they're doing. Protecting users from themselves at the expense of those who know what they're doing is not what Arch is about.
> It's too late then. People are already affected.
That doesn't make sense, it's too late for people if new malware isn't detected by VirusTotal as well.
Goes against the very nature of the distro. I very rarely see assumed defaults in Arch, and they are almost always opt-in. Mind you, you need community-provided helpers to automate AUR building; it's that barebones, and I'm sure there are people who manually build / use custom scripts for every package.
AFAIK, VirusTotal only flags known malware/viruses, any new/"looks-to-be-new" stuff wouldn't be flagged until they've picked it up, and once someone would have picked it up, it should be removed from the AUR anyways. So you'd have at least one user (most likely more) getting infected first, and once detected more users wouldn't be able to install it regardless.
This is where your and my intentions differ. I don't want the average Arch user to be infected when it can be prevented because the malware is known about.
Me neither, my argument would be that VirusTotal won't stop the initial users from getting infected, so not good enough in my mind.
and the official repos don't have enough packages to run Arch the way I want :\ I don't want to go back to Ubuntu
It'd be nice to test it with a sample AUR package/malware.
PS. Regarding downloading files from the internet: every self-update tool does that nowadays, and it's becoming more common because of Apple's and other stores' policies. I've created a few remote-control tools and it is very, very difficult to catch them, and I'm not even a professional malware researcher. The things they do are beyond the understanding of the average superuser.
Running random binaries uploaded by some anonymous dude on your computer has to be the equivalent of buying heart medicine on Craigslist. And because Arch is so barebones to begin with, the AUR is very popular; you see a lot of Arch users using it.
Not a single enterprise distro even reacts within that timeframe. OVAL advisories are weeks, sometimes months later.
As long as you don't have a virtualization approach similar to QubesOS, no Linux distro will fix this problem, because that's not how separation of concerns works in a POSIX system. You'd need a separate user for each and every program to isolate them, and that is practically unfeasible.
Pretty much every browser that isn't Firefox including Chrome, VS Code, most proprietary software like Slack, Zoom, Spotify, many vpn clients and password managers, a lot of them seemingly not published by the companies in question.
All of those ancillary password, vpn or security related products who aren't going to be in the main repo because they have proprietary elements and also rely on random people seems particularly bad. And there's a lot of software in that category.
That's what Flatpak is for. If you must install crappy proprietary software, at least get an official package from the developer.
nix, which has its own share of problems.
Care to elaborate?
Yes, proprietary software has to be installed separately, but for things like cloud password managers you're already putting your trust someplace else. You're also not likely to be hit by one of these fly-by attacks, because the stuff people want is popular, has people watching it constantly, and has reputable maintainers. These patch/fix packages looked suspicious and probably didn't have a single person review them.
Most aren't, but it's trivial to review changes to packages (all good AUR helpers show the diff on upgrades, and 99% of the time the changes are just the hash and version, nothing else).
So you only need to check the package once, which the documentation reminds you to do about fifty times. Otherwise — play stupid games, win stupid prizes.
If the package has any popularity at all, you will get lots of paranoid users who will eat you alive and report to Arch maintainers right away if you do anything suspicious, try to link a binary from some weird website instead of the upstream URL, or even just omit the GPG signature verification key when it's available.
AUR helpers make reviewing changes to AUR packages a trivial matter that takes about 2 minutes of my life per month. In exchange I get easy access to software that isn't packaged for Ubuntu and probably never will be, because building debs and going through the process of upstreaming them is roughly comparable to getting a PhD (if anyone is even interested in your debs, which they probably won't be).
This makes me nervous. I guess it’s time to do some audits.
My impression is that the malice was spotted timely, and not many people were affected. Which is a pretty good thing!
All decent AUR helpers (which arch developers advise against using anyway) force you to read through the packaging script and confirm that you understand it and are fine with what's about to be executed.
It's no more of an issue than someone posting a malware script into e.g. the wiki. Much less obscure than malware in npm or anything like that.
Yes, the AUR is user-provided content. Yes, system administrators are responsible for being aware of what they’re installing. You can find many comments from me on this page discussing that.
An attacker being detected using an official service hosted by Archlinux for user-managed packages to push malware is still noteworthy.
When installing software on Arch Linux: first search for official packages provided by the Arch Linux maintainers, then for official installation methods approved by the software's authors, then for AUR packages that do the installation exactly the way the authors describe.
A search on the default installation method of Firefox and librewolf package on arch Linux is listed below.
If the AUR is required to install a package, note that AUR packages are not trusted by default, because not all of them are maintained by trusted users. Always check the source files and the installation steps documented in the PKGBUILD. Don't install until EVERY line in the PKGBUILD is reasonable.
jabjq•6mo ago
How are they supposed to do that when you give them no information as to what the malware does?
rwmj•6mo ago
More interesting questions are:
- Who was the uploader? A packager? For how long?
- Do they maintain other packages?
- What steps can be taken to ensure that a similar problem doesn't happen in future?
gpm•6mo ago
The AUR is arch's repository of untrusted user maintained read-the-source-before-installing packages. There's really not much that can be done to prevent similar issues in the future... because the whole purpose of the AUR is to allow random people to upload packages.
Arch doesn't ship with any way to install AUR packages other than downloading the tarball and building them locally. Tools for installing the packages usually force you to read the PKGBUILD that controls the build process (including getting sources) before letting you build the packages. I.e. the reasonable steps have already been taken.
Edit: firefox-patch-bin was first submitted to the AUR 2025-07-16 21:33 (UTC), so less than two days before removal.
amy214•6mo ago
I mean... if this was a malicious actor, who is to say they don't have 15 aliases on 5 Linux distros?
diggan•6mo ago
With that comes the same warning as downloading random stuff from the internet and executing it, you need to carefully review everything before running/installing it, as you're basically doing a fancy version of "curl | bash" when using the AUR.
gpm•6mo ago
The malware operator could have done anything with that access... There's no way for the maintainers to know what was done on any given infected machine.
akazantsev•6mo ago
Also, an attacker may leave no traces by simply dumping the payload to /tmp.
gpm•6mo ago
Assuming the malware doesn't clean up after itself, `pacman -Q firefox-patch-bin librewolf-fix-bin zen-browser-patched-bin` would tell you if they are installed... but if it did clean up after itself... how are the maintainers supposed to know what steps were taken to clean up given that it's a rat that could be running different steps on different computers...
gpm•6mo ago
That said, if you did, yeah being hacked is scary and I feel for you.
johnisgood•6mo ago
https://aur.archlinux.org/packages/librewolf-bin#comment-103...
nulld3v•6mo ago
- librewolf-fix-bin
- firefox-patch-bin
- zen-browser-patched-bin
The packages were only available for download for 3 days, and the only way you could have installed them is if you explicitly typed one of the package names into your terminal within those 3 days.
Did you do that? If no, then you are not compromised.
Ancapistani•6mo ago
My desktop OS is much less of a concern now, so I mostly use macOS. It provides a decent shell and otherwise stays out of my way. I use Windows for gaming.