It crashed NON-STOP. And it would not remember my profile when I shut down, which made the crashes even worse, since I lost anything I was working on.
I finally figured out the problem, switched to 64 bit and it was like magic: Firefox actually worked again.
Linux distros tend not to be as obsessive about maintaining full 32-bit support as the Windows world.
A better test would be to fire up a 32 bit VM and see if Firefox 32 bit crashed there...
I didn't change libraries; it was a gradual migration where you convert applications to 64-bit one at a time, and I hadn't thought to do Firefox. It wasn't missing any 32-bit libraries.
It was simply a profile that I'd been using continuously since approximately 2004, and it had probably grown too large to fit in a 32-bit address space anymore, or maybe Firefox itself needed more memory than it could map. (The system had a 64-bit kernel, so it wasn't low on RAM, but 32-bit apps are limited to 2/3/4GB of address space.)
Firefox does have problems with old profiles, though. I could easily see crud building up there. I don't think Firefox is very good about clearing it out (unless you do the refresh profile thing). You could maybe diagnose it with about:memory, if you were still running that configuration.
(I'm no longer running it, I switched to 64 bit and was VERY happy to no longer have crashes.)
I technically could re-install the 32 bit one, and try it, but honestly, I don't really want to!
From when I used to work on performance and reliability at Mozilla, these types of user-specific crashes were often caused by a faulty system library or antivirus-like software doing unstable injections/hooks. Frequent crashes were also easier to reproduce and, as a result, easier to fix.
(I happened to have an unsubmitted one, which I just submitted; all the other ones I submitted are older than 6 months and have been purged.)
It would crash in random spots, but usually in some kind of X, GLX, EGL or similar library.
But I don't think it was GLX etc., because it also almost never saved my profile, which was actually a much worse problem!
(This crash is from a long time ago, hence the old Firefox version.)
Also, I tried disabling EGL in the 32-bit build, with no effect on the crashing.
It's fine to keep hosting the older versions for download and pointing users to them if they need them. But other than that, I see zero reason to put in any effort at all to support 32-bit. It's ancient; people moved on well over a decade and a half ago.
If I were in charge I'd have dropped active development for it probably 10 years ago.
That's nice... When this was originally posted on 09-05 it just mentioned "32-bit support", so I'd been worried this would be the end of me using FF on a Microsoft Surface RT (armv7, running Linux).
And doing security updates on ESR for a year is decent. (Though people using non-ESR stream builds of Firefox will much sooner have to downgrade to ESR, or be running with known vulnerabilities.)
If it turns out there's a significant number of people who really want Firefox on 32-bit x86, would it be viable for non-Mozilla volunteers to fork the current ESR or main stream, do bugfixes, backport security fixes, and distribute that unofficial or rebranded build?
What about volunteers trying to keep backporting mainline development? Or is that likely to become prohibitively hard at some point? (And if it is, would it be better to use mainline as the baseline for ongoing maintenance, or the ESR?)
Distro       | Released | General support | Extended support | Long-term support
-------------|----------|-----------------|------------------|-------------------
SLES 11      | 2009-03  | 2019-03         | 2022-03          | 2028-03
RHEL 6       | 2010-11  | 2019-08         | 2024-06          | 2029-05
Arch         | 2017-11  | Ongoing via unofficial community project (Arch32) | — | —
Ubuntu 18.04 | 2018-04  | 2023-05         | 2028-04          | 2030-04
Fedora 31    | 2019-10  | 2020-11         | N/A              | N/A
Slackware 15 | 2022-02  | Ongoing (most recent release) | —  | —
Debian 12    | 2023-06  | 2026-06         | 2028-06          | —
Gentoo       | Rolling  | Ongoing         | —                | —
By the time Firefox 32-bit is dropped, all the versioned distros will be past their general support date and into extended support, leaving Gentoo, Arch32, and a handful of smaller distros. Of course, there are also folks running a 64-bit kernel with 32-bit Firefox to save memory.
e2le•4d ago
Maybe they could also drop support for older x86-64 CPUs, releasing more optimised builds. Most Linux distributions are raising their baseline to x86-64-v2 or higher, and most Firefox users (>90%)[0] seem to meet at least the x86-64-v2 requirements.
[0]: https://firefoxgraphics.github.io/telemetry/#view=system
[1]: https://firefoxgraphics.github.io/telemetry/#view=general
Wowfunhappy•1d ago
As long as you don’t open a million tabs and aren’t expecting to edit complex Figma projects, I’d expect browsing the web with a Pentium + a lightweight distro to be mostly fine.
Idk, I think this is sad. Reviving old hardware has long been one thing Linux is really great at.
doubled112•1d ago
My wife had an HP Stream thing with an Intel N3060 CPU and 4GB of RAM. I warned her, but it was cheap, and it almost got the job done.
Gmail's web interface would take almost a minute to load. It uses about 500MB of RAM by itself in Chrome.
Does browsing the web include checking your email? Not if you need web mail, apparently.
Check out the memory usage for yourself one of these days on the things you use daily. Could you still do them?
anthk•22h ago
4GB of RAM should be more than enough.
mschuster91•23h ago
Got the ThinkPad for half its eBay value at a hamfest. Made in 2018-ish, i5-8350U CPU... It's a nice thing; the form factor is awesome, and so is the built-in LTE modem. The problem is, more than a dozen Chrome tabs and it slows to a crawl. Even my prior work machine, a 2015 MBP, performed better.
And yes, you absolutely need a beefy CPU for a news site. Just look at Süddeutsche Zeitung, a reputable newspaper: 178 requests, 1.9 MB, 33 seconds load time. And almost all of that crap is some sort of advertising, despite me being an actually subscribed customer with adblock enabled on top of that.
anthk•22h ago
- uBlock Origin instead of AdBlock
- git clone git://bitreich.org/privacy-haters
Although, this being HN, I would just suggest disabling JS in the uBlock Origin settings and enabling uBO's advanced settings. Then click on the uBO icon and mark the third-party scripts and frames in red, leaving the first-party images/requests enabled. From there, start accepting newspapers' domains and CDNs until the site works. The CPU usage will plummet.
cosmic_cheese•1d ago
More generally I feel that Core 2 serves as a pretty good line in the sand across the board. It’s not too hard to make machines of that vintage useful, but becomes progressively challenging with anything older.
kstrauser•23h ago
Frankly, anything older than that sucks so much power per unit of work that I wouldn’t want to use them for anything other than a space heater.
anthk•22h ago
Go try browsing the web without uBlock Origin today on an i3.
FirmwareBurner•1d ago
Mate, a 20-year-old system means a Pentium 4 Prescott or an Athlon 64, both of which had 64-bit support. And a year after that, we already had dual-core 64-bit CPUs.
So if you're stuck on a 32-bit CPU, then your system is even older than 20 years.
kstrauser•23h ago
So you could very well have bought a decent quality 32 bit system after 2005, although the writing was on the wall long before then.
FirmwareBurner•23h ago
Not really. With the launch of the Athlon 64, AMD basically replaced their whole 32-bit CPU lineup with the new arch, rather than keeping it around much longer as a lower-tier part. By 2005 I expect 90% of new PCs sold were already 64-bit ready.
axiolite•21h ago
You're several years off:
"The FIRST processor to implement Intel 64 was the multi-socket processor Xeon code-named Nocona in June 2004. In contrast, the initial Prescott chips (February 2004) did not enable this feature."
"The first Intel mobile processor implementing Intel 64 is the Merom version of the Core 2 processor, which was released on July 27, 2006. None of Intel's earlier notebook CPUs (Core Duo, Pentium M, Celeron M, Mobile Pentium 4) implement Intel 64."
https://en.wikipedia.org/wiki/X86-64#Intel_64
"2012: Intel themselves are limiting the functionality of the Cedar-Trail Atom CPUs to 32bit only"
https://forums.tomshardware.com/threads/no-emt64-on-intel-at...
Intel had 80% of the CPU market at the time.
FirmwareBurner•1d ago
Nah mate, something doesn't add up; I can't buy this. Even the cheapest Atoms had 64-bit support much earlier than that, and Atoms were lower-tier silicon than Celerons, so you can't tell me Intel had brand-new 32-bit-only Celerons in 2019.
My Google-fu found that the last 32-bit-only chips Intel shipped were the Quark embedded SoCs, EoL'd in 2015. So what you're saying doesn't pass the smell test.
pdntspa•1d ago
My point is this stuff is still in play in a lot of places.
FirmwareBurner•1d ago
So when you tell me "brand new 32 bit Celeron" it is understood as "just came onto the market".
Am I right or wrong with this understanding?
>My point is this stuff is still in play in a lot of places.
I spent ~15 years in embedded and can't concur on the "still in play in a lot of places" part. I'm not denying some users might still exist out there, but I'm sure we can count them on very few fingers, since Intel's 32-bit embedded chips never had much traction to begin with.
pdntspa•1d ago
https://www.merriam-webster.com/dictionary/brand-new
gwbas1c•23h ago
As in, a product that was manufactured, kept in its original packaging, and "unopened and unused".
(Although there's some allowances for the vendor to test because you don't want to buy something DOA.)
(Although I won't get too angry for someone saying "brand new." "New old stock" is kind of an obscure term that you don't come across unless you're the kind of person who cares about that kind of thing.)
nicoburns•23h ago
They will be in dire straits if the Google money goes away for some reason, but right now they have plenty of money.
(not that I think it makes any sense for them to maintain support for 32-bit cpus)
hulitu•8h ago
Last I checked, Mozilla was an ad company with Google as the main "donor".
01HNNWZ0MV43FF•22h ago
These things that look like institutions, that look like bricks carved from granite, are just spinning plates that have been spinning for a few years.
When I fight glibc dependency hell across Ubuntu 22 and Ubuntu 24, I sympathize with Firefox choosing to spin the 64-bit plates and not the 32-bit plates.
kstrauser•22h ago
Employees: “We want to use new feature X.”
Boss: “Sorry, but that isn’t available for our wealthy customers who are stuck on Eee PCs.”
Nah.
darkmighty•1d ago
Question: Don't optimizers support multiple ISA versions, similar to web polyfills, and run the appropriate instructions at runtime? I suppose the runtime checks have some cost. At least I don't think I've ever run anything that errored out due to specific missing instructions.
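[They can, when asked to. A minimal sketch of compiler-assisted runtime dispatch, assuming gcc or clang on x86-64 with a libc that supports ifuncs (e.g. glibc); the `sum` function and its workload are invented for illustration:]

```c
/* Sketch of GCC/clang function multi-versioning ("target_clones"):
   the compiler emits one clone of the function per listed ISA level,
   plus a resolver the dynamic linker runs once to bind the symbol. */
#include <stdio.h>

__attribute__((target_clones("default", "sse4.2", "avx2")))
long sum(const int *a, long n) {
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];          /* auto-vectorized differently in each clone */
    return s;
}

int main(void) {
    int a[1000];
    for (int i = 0; i < 1000; i++)
        a[i] = i;
    /* The clone is chosen once at load time, not per call, so the
       runtime-check cost asked about above is amortized away. */
    printf("%ld\n", sum(a, 1000));
    return 0;
}
```

[This also suggests why programs rarely error out on missing instructions: the dispatch cost is paid once at symbol resolution. The catch is that only explicitly annotated functions, or library routines like glibc's memcpy, get this treatment; the rest of the binary sticks to its compile-time baseline.]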
igrunert•1d ago
The last processor without the CMPXCHG16B instruction was released in 2006 so far as I can tell. Windows 8.1 64-bit had a hard requirement on the CMPXCHG16B instruction, and that was released in 2013 (and is no longer supported as of 2023). At minimum Firefox should be building with -mcx16 for the Windows builds - it's a hard requirement for the underlying operating system anyway.
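[For the curious, a rough sketch of what -mcx16 is about: a 16-byte compare-and-swap through C11 atomics. The struct and function names are invented; note that whether the compiler inlines cmpxchg16b or routes the operation through libatomic varies by compiler and version.]

```c
/* Sketch: a 16-byte (double-width) compare-and-swap. With -mcx16,
   clang can inline this as a single lock cmpxchg16b; gcc may instead
   call libatomic (link with -latomic), which itself uses cmpxchg16b
   when the CPU has it. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    void *ptr;
    unsigned long tag;   /* version counter to dodge ABA problems */
} tagged_ptr;            /* 16 bytes on x86-64 */

static _Atomic tagged_ptr slot;

bool try_publish(tagged_ptr expected, tagged_ptr desired) {
    return atomic_compare_exchange_strong(&slot, &expected, desired);
}
```

[Built with something like `cc -O2 -mcx16 -c cas16.c`, plus `-latomic` when linking with gcc.]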
mort96•1d ago
You could also have only one implementation of strcpy and use no exotic instructions. That would also be faster for small inputs, for the same reasons.
Having multiple implementations of strcpy selected at runtime optimizes for a combination of binary portability between different CPUs and for performance on long input, at the cost of performance for short inputs. Maybe this makes sense for strcpy, but it doesn't make sense for all functions.
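[For readers unfamiliar with the mechanism being debated here: glibc picks among its strcpy variants with "ifunc" resolvers, which ordinary programs can use too. A sketch under the assumption of gcc/clang with glibc; both implementations are plain-C stand-ins (glibc's real variants are hand-tuned assembly), and all names are illustrative:]

```c
/* Sketch of a glibc-style ifunc: the dynamic linker calls the resolver
   once, at relocation time, and binds the symbol to whichever
   implementation it returns. */
static char *my_strcpy_generic(char *dst, const char *src) {
    char *ret = dst;
    while ((*dst++ = *src++) != '\0')
        ;
    return ret;
}

/* Stand-in for a wide-register variant; real ones use SIMD loads. */
__attribute__((target("avx2")))
static char *my_strcpy_avx2(char *dst, const char *src) {
    char *ret = dst;
    while ((*dst++ = *src++) != '\0')
        ;
    return ret;
}

/* Resolvers run very early, hence the explicit cpu-detection init. */
static char *(*resolve_my_strcpy(void))(char *, const char *) {
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? my_strcpy_avx2
                                          : my_strcpy_generic;
}

char *my_strcpy(char *dst, const char *src)
    __attribute__((ifunc("resolve_my_strcpy")));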
duped•1d ago
You can't really state this with any degree of certainty when talking about whole-program optimization and function inlining. Even with LTO today you're talking 2-3% overall improvement in execution time, without getting into the tradeoffs.
mort96•23h ago
> Even with LTO today you're talking 2-3% overall improvement in execution time
Is this comparing inlining vs no inlining or LTO vs no LTO?
In any case, I didn't mean to imply that the difference is large. We're literally talking about a couple clock cycles at most per call to strcpy.
duped•14h ago
And at runtime, there is no meaningful difference between strcpy being linked at runtime or ahead of time. libc symbols get loaded first by the loader and after relocation the instruction sequence is identical to the statically linked binary. There is a tiny difference in startup time but it's negligible.
Essentially the C compilation and linkage model makes it impossible for functions like strcpy to be optimized beyond the point of a function call. The compiler often has exceptions for hot stdlib functions (like memcpy, strcpy, and friends) where it will emit an optimized sequence for the target, but this is the exception that proves the rule. In practice, statically linking in dependencies (like you're talking about) does not have a meaningful performance benefit in my experience.
(*) strcpy is weird; like many libc functions it's accessible via __builtin_strcpy in gcc, which may (but probably won't) emit a different sequence of instructions than the call to libc. I say "probably" because there are semantics undefined by the C standard that the compiler cannot reason about but the linker must support, like preloads and injection. In these cases symbols cannot be inlined, because that would break the ability to inject a replacement for the symbol at runtime.
mort96•11h ago
Repeating the part of my post that you took issue with:
> If there was only one implementation of strcpy and it was the version that happens to be picked on my particular computer, and that implementation was in a header so that it could be inlined by my compiler, my programs would execute faster.
So no, I'm not talking about LTO. I'm talking about a hypothetical alternate reality where strcpy is in a glibc header so that the compiler can inline it.
There are reasons why strcpy can't be in a header, and the primary technical one is that glibc wants the linker to pick between many different implementations of strcpy based on processor capabilities. I'm discussing the loss of inlining as a cost of having many different implementations picked at dynamic link time.
zokier•1d ago
https://news.ycombinator.com/item?id=44884709
but generally it is rare to see higher than x86-64-v3 as a requirement, and that works with almost all CPUs sold in the past 10+ years (Atoms being a prominent exception).
mort96•1d ago
There are more relevant modern examples, but one example that I really think illustrates the issue well is floating point instructions. The x87 instruction set is the first set of floating point instructions for x86 processors, introduced with the 8087 coprocessor in 1980. In the late 90s/early 2000s, Intel released CPUs with the new SSE and SSE2 extensions, which took a new approach to floating point (x87 was really designed around a separate floating point coprocessor, a design that's unfortunate now that CPUs have native floating point support).
So modern compilers generate SSE instructions rather than the (now considered obsolete) x87 instructions when working with floating point. Trying to run a program compiled with a modern compiler on a CPU without SSE support will just crash with an illegal instruction exception.
There are two main ways we could imagine supporting x87-only CPUs while using SSE instructions on CPUs with SSE:
Every time the compiler wants to generate a floating point instruction (or sequence of floating point instructions), it could generate the x87 instruction(s), the SSE instruction(s), and a conditional branch to the right place based on SSE support. This would tank performance. Any performance saving you get from using an SSE instruction instead of an x87 instruction is probably going to be outweighed by the branch.
The other option is: you could generate one x87 version and one SSE version of every function which uses floats, and let the dynamic linker sort out function calls, picking the x87 version on old CPUs and the SSE version on new CPUs. This would more or less leave performance unaffected, but in the worst case it would almost double your code size (since you may end up with two versions of almost every function). And in fact, it's worse: the original SSE only supports 32-bit floats, while SSE2 supports 64-bit floats; so you want one version of every function which uses x87 for everything (for the really old CPUs), one version which uses x87 for 64-bit floats and SSE for 32-bit floats, and one version which uses SSE and SSE2 for all floats. Oh, and SSE3 added some useful instructions, so you want a fourth version of some functions which can use SSE3, with a slower fallback on systems without it. Suddenly you're generating four versions of most functions. And this is only SSE, without considering the other axes along which CPUs differ.
You have to actively make a choice here about what to support. It doesn't make sense to ship every possible permutation of every function; you'd end up with massive executables. You typically assume a baseline instruction set from some time in the past 20 years, so you're gonna let your compiler go wild with SSE/SSE2/SSE3/SSE4 instructions and let your program crash on the i486. For specific functions which get a particularly large speed-up from using something more exotic (say, AVX512), you can manually include one exotic version and one fallback version of that function.
But this causes the problem that most of your program is gonna get compiled against some baseline, and the more constrained that baseline is, the more CPUs you're gonna support, but the slower it's gonna run (though we're usually talking single-digit percents faster, not orders of magnitude faster).
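[The "one exotic version plus one fallback" approach from the comment above can be done by hand with a runtime CPU check; a minimal sketch, assuming gcc or clang on x86-64 (the workload and all names are invented for illustration):]

```c
/* Sketch of manual per-function dispatch: one AVX-512 build of a hot
   function, one baseline build, chosen by a runtime feature check. */
#include <stddef.h>

__attribute__((target("avx512f")))
static void scale_avx512(float *x, size_t n, float k) {
    for (size_t i = 0; i < n; i++)
        x[i] *= k;   /* compiler may use 512-bit zmm registers here */
}

static void scale_baseline(float *x, size_t n, float k) {
    for (size_t i = 0; i < n; i++)
        x[i] *= k;   /* compiled against the program's normal baseline */
}

void scale(float *x, size_t n, float k) {
    /* One coarse branch per call; real code would cache the choice
       in a function pointer so the check isn't repeated. */
    if (__builtin_cpu_supports("avx512f"))
        scale_avx512(x, n, k);
    else
        scale_baseline(x, n, k);
}
```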
arp242•1d ago
Probably less, not more. Many distros either stopped supporting 32bit systems, or are planning to. As the announcement says, that's why they're stopping support now.
3np•1d ago
Kiosks and desktops and whatnot on Raspis are still on 32-bit, and likely to be running Firefox without telemetry.
xorcist•1d ago
Debian still supports MIPS and SPARC. Last I checked, OpenSSL is kept buildable on OpenVMS. Surely there must be a handful of people out there who care about good old x86?
If your numbers are correct, there are millions if not tens of millions of Firefox users on 32-bit. If none of them are willing to keep Firefox buildable, there must be something more to it.
padenot•1d ago
We carefully ran some numbers before doing this, and it affects a few hundred to a few thousand people (hard to say; ballpark), and most of those people are on 64-bit CPUs but are using a 32-bit Firefox or a 32-bit userspace.
The comparatively high ratio of 32-bit users on Windows is not naively applicable to the Linux desktop population, which migrated ages ago.
xorcist•22h ago
That's the specific meaning of "support" that it was my intention to point out. Free software projects usually do not "support" software in the commercial sense, but consider platforms supported when there are enough people to keep the build alive and up to date with changing build requirements, etc. It was my expectation that Firefox was more like a free software project than a commercial product, but perhaps that is not the case?
Commercial products have to care about not spreading their resources thin, but for open source, cause and effect run the other way around: the resources available are usually the input parameter that decides what it is possible to support. Hence my surprise that not enough people are willing to support a platform that has thousands of users and isn't particularly exotic, especially compared to what mainstream distributions like Debian already build.
ars•22h ago
Mozilla should try to automate this switch where the system is compatible with it.