frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, success-failure-skip notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•3m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•4m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•17m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•20m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•22m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•30m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•32m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•33m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•34m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•36m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•37m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•41m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•43m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•43m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•44m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•46m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•49m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•52m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•58m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•59m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Debian switches to 64-bit time for everything

https://www.theregister.com/2025/07/25/y2k38_bug_debian/
418•pseudolus•6mo ago

Comments

pilif•6mo ago
"everything" for those values of "everything" that do not include one of the most (if not the most) widely used 32 bit architectures.

(snark aside: I understand the arguments for and against making the change for i386 and I think they did the right thing. It's just that I take slight issue with the headline)

pantalaimon•6mo ago
I doubt that i386 is still widely used. You are more likely to find embedded ARM32 devices running Linux, for x86 this is only the case in the retro computing community.
Ekaros•6mo ago
Intel Core 2, the start of their mainstream 64-bit CPUs, turns 20 next year. The Athlon 64 is over 20 years old... I truly wonder how many real computers, and not just VMs, are left.
pantalaimon•6mo ago
The later Prescott Pentium 4 already supported 64-bit, but the Pentium M and first-generation Atom did not.
pilif•6mo ago
I wasn't discounting VMs with my initial statement. I can totally imagine quite a few VMs still being around, either migrated from physical hardware or even set up fresh to conserve resources.

Plus, keeping i386 the same also means any still available support for running 32 bit binaries on 64 bit machines.

All of these cases (especially the installable 32-bit support) must be as big as or bigger than the number of ARM machines out there.

bobmcnamara•6mo ago
Opt in numbers here: https://popcon.debian.org/
pilif•6mo ago
That tells me my snark about i386 being the most commonly used 32-bit architecture wasn't too far off reality, doesn't it?
bobmcnamara•6mo ago
Indeed - i386 is certainly the most common 32-bit Debian platform.

Note also that the numbers are log-scale, so while it looks like Arm64 is a close third over all bitwidths, it isn't.

umanwizard•6mo ago
I'm actually amazed by this, I would have bet a lot on aarch64 being second.
bobmcnamara•6mo ago
Debian doesn't support Arm64 Mac...
umanwizard•6mo ago
You can run whatever distro you want in a VM, though. My daily driver is GuixSD in an aarch64 VM on a Mac. I wouldn’t recommend guixsd if you don’t love tinkering and troubleshooting, but otherwise the setup works fine.
axus•6mo ago
In the Linux binary context, do i386 and i686 mean the same thing? i686 seems relatively modern in comparison, even if it's 32-bit.
mananaysiempre•6mo ago
Few places still maintain genuine i386 support; I don't believe the Linux kernel does, for example. There are some important features it lacks, such as CMPXCHG. Nowadays Debian's i386 is actually i686 (Pentium Pro), but apparently they've decided to introduce a new "i686" architecture label to denote a 32-bit x86 ABI with a 64-bit time_t.

Also, I’m sorry to have to tell you that the 80386 came out in 1985 (with the Compaq Deskpro 386 releasing in 1986) and the Pentium Pro in 1995. That is, i686 is three times closer to i386 than it is to now.

jart•6mo ago
8.8 percent of Firefox users have 32-bit systems. It's probably mostly people with a 32-bit install of Windows 7 rather than people who actually have an old 32-bit Intel chip like Prescott. Intel also sold 32-bit chips like Intel Quark inside Intel Galileo boards up until about 2020. https://data.firefox.com/dashboard/hardware

People still buy 16-bit i8086 and i80186 microprocessors too. Particularly for applications like defense, aerospace, and other critical systems where they need predictable timing, radiation hardening, and don't have the resources to get new designs verified. https://www.digikey.at/en/products/detail/rochester-electron...

wongarsu•6mo ago
On Windows, encountering 32-bit software isn't all that rare. It runs on 64-bit hardware under a 64-bit OS, but that doesn't change the fact that the 32-bit software uses 32-bit libraries and 32-bit OS interfaces.

Linux is a lot more uniform in its software, but when emulating Windows software you can't discount i386.

pm215•6mo ago
It's actually still pretty heavily used in some niches, which mostly amount to "running legacy binaries on an x86-64 kernel". LWN had an article recently about the Fedora discussion on whether to drop i386 support (they decided to keep it): https://lwn.net/Articles/1026917/

One notable use case is Steam and running games under Wine -- there are apparently a lot of 32 bit games, including still some relatively recent releases.

Of course, if your main use case for the architecture is "run legacy binaries", then an ABI change probably induces more pain than it solves, hence its exclusion from Debian's transition here.

zokier•6mo ago
i386 is not really a properly supported arch for trixie anymore:

> From trixie, i386 is no longer supported as a regular architecture: there is no official kernel and no Debian installer for i386 systems.

> Users running i386 systems should not upgrade to trixie. Instead, Debian recommends either reinstalling them as amd64, where possible, or retiring the hardware.

https://www.debian.org/releases/trixie/release-notes/issues....

IsTom•6mo ago
> retiring the hardware.

The contrast with the age of hardware retired by Windows 11 is a little funny.

account42•6mo ago
There is still an i386 userspace for amd64 systems, so this doesn't change anything.
pavon•6mo ago
Most production uses of 32-bit x86, like industrial equipment controllers and embedded boards, support i686 these days, which is getting 64-bit time.
amelius•6mo ago
Can we also switch to unlimited/dynamic command line length?

I'm tired of "argument list too long" on my 96GB system.

loloquwowndueo•6mo ago
Ever heard of xargs?
amelius•6mo ago
Sure. But it's not the same thing (instead of invoking the command once, it invokes the command multiple times), and it's a workaround at best. The ergonomics are not great either, especially if you first type the command without xargs, then find out the argument list is too long and have to reformulate it with xargs.
jcelerier•6mo ago
How does that help with a GCC command line?
tomsmeding•6mo ago
You may already know this, but GCC supports option files: see "@file" at the bottom of this page https://gcc.gnu.org/onlinedocs/gcc/Overall-Options.html .

This does not detract from the point that having to use this is inconvenient, but it's there as a workaround.
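
A minimal sketch of what that looks like in practice (file names here are made up):

    # printf is a bash builtin, so this glob is not subject to the execve() argument limit
    printf '%s\n' src/*.c > args.txt

    # gcc reads additional command-line arguments from the file named after '@'
    gcc @args.txt -O2 -o myprog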

styanax•6mo ago
ARG_MAX (`getconf ARG_MAX`) is defined at the OS level (the kernel, perhaps with glibc involved; haven't looked); xargs is also subject to its limitation like all other processes. One can use:

    xargs --show-limits --no-run-if-empty </dev/null
...to see nicely formatted output, including the 2048-byte headroom that POSIX recommends, on a separate line.
jeroenhd•6mo ago
You can recompile your kernel to work around the 100k-ish command line length limit: https://stackoverflow.com/questions/33051108/how-to-get-arou...

However, that sounds like solving the wrong end of the problem to me. I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

tomrod•6mo ago
A huge security manifold to encourage adoption then sell white hat services on top?
dataflow•6mo ago
> I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

I didn't either, until I learned about compiler command line flags.

But also: the nice thing about command line flags is they aren't persisted anywhere (normally). That's good for security.

eightys3v3n•6mo ago
I thought there were ways for other users on the system to see a running process's command line flags. This isn't so good for security.
dataflow•6mo ago
That depends on your threat model.
Ghoelian•6mo ago
Pretty sure `.bash_history` includes arguments as well.
dataflow•6mo ago
That's only if you're launching from an interactive Bash? We're talking about subprocess launches in general.
johnisgood•6mo ago
If you add a whitespace before the command, it will not even get appended to history!
amelius•6mo ago
This is quite annoying behavior, actually.
nosrepa•6mo ago
Good thing you can turn it off!
amelius•6mo ago
Yeah, took me a while to figure that out though. Plus I lost my history.

Typing an extra space should not invoke advanced functionality. Bad design, etc.

zamadatix•6mo ago
I had never run across this and it didn't work for me when I tried it. After some reading it looks like the HISTCONTROL variable needs to be set to include "ignorespace" or "ignoreboth" (the other option included in this is "ignoredups").

This would be really killer if it was always enabled and the same across shells, but "some shells support something akin to it, and you have to check whether it is actually enabled on the ones that do" is just annoying enough that I probably won't bother adopting this on my local machine, even though it sounds convenient as a concept.
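
For anyone who does want to opt in, it is a one-liner for bash (other shells have their own knobs, e.g. zsh's HIST_IGNORE_SPACE option):

    # ~/.bashrc -- drop space-prefixed commands (and consecutive duplicates) from history
    HISTCONTROL=ignoreboth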

wongarsu•6mo ago
I'm mostly using debian and ubuntu flavors (both on desktop and the cloud images provided and customized by various cloud and hosting providers) and they have all had this as the default behavior for bash

YMMV with other shells and base distros

zamadatix•6mo ago
Proxmox (debian 12.x based) and Arch both didn't for me (both w/ bash). An Ubuntu 24.04 container in Proxmox did though.
wongarsu•6mo ago
    tar cf metadata.tar.zstd *.json

in a large directory of image files with json sidecars

Somebody will say use a database, but when working for example with ML training data one label file per image is a common setup and what most tooling expects, and this extends further up the data preparation chain

NegativeK•6mo ago
I won't say use a database, but I will beg you to compress a parent directory instead of making a tar bomb.
mort96•6mo ago
That really just makes the problem worse: tar czf whatever.tgz whatever/*.json; you've added 9 bytes to every file path.

I mean I get that you're suggesting to provide only one directory on the argv. But it sucks that the above solution to add json files to an archive while ignoring non-json files only works below some not-insane number of files.

stouset•6mo ago
Having an insane number of files in the same directory is already a performance killer. Here you’re talking about having something like 10,000 JSON files in one directory plus some number of non-JSON files, and you’d just be better off in all cases having these things split across separate directories.

Does it suck you can’t use globbing for this situation? Sure, yeah, fine, but by the time it’s a problem you’re already starting to push the limits of other parts of the system too.

Also using that glob is definitely going to bite you when you forget some day that some of the files you needed were in subdirectories.

wongarsu•6mo ago
But in this example the files are conceptually a flat structure. Any hierarchy would be artificial, like the common ./[first two letters]/[second two letters]/filename structure. Which you can do, but it certainly doesn't make creating the above tarball any easier. Now we really need to use some kind of `find` invocation instead of a simple glob

It also just extends the original question. If I have a system with 96GB RAM and terabytes of fast SSD storage, why shouldn't I be able to put tens of thousands of files in a directory and write a glob that matches half of them? I get that this was inconceivable in v6 unix, but in modern times those are entirely reasonable numbers. Heck, Windows Explorer can do that in a GUI, on a network drive. And that's a program that has been treated as essentially feature complete for nearly 30 years now, on an OS with a famously slow file system stack. Why shouldn't I be able to do the same on a linux command line?

mort96•6mo ago
> Does it suck you can’t use globbing for this situation? Sure, yeah

Then we agree :)

cesarb•6mo ago
> tar cf metadata.tar.zstd *.json

From a quick look at the tar manual, there is a --files-from option to read more command line parameters from a file; I haven't tried, but you could probably combine it with find through bash's process substitution to create the list of files on the fly.
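
Something like the following ought to work with GNU tar and bash (untested, as said above):

    # build the file list on the fly so the *.json glob never has to fit on the command line
    # (use find -print0 plus tar's --null option if filenames may contain newlines)
    tar cf metadata.tar.zstd --files-from <(find . -maxdepth 1 -name '*.json')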

syncsynchalt•6mo ago
Yes, workarounds exist, but it would be nicer if the arbitrary limits were removed instead.
badc0ffee•6mo ago
You don't need to tar everything in one command. You can batch your tar into multiple commands with a reasonable amount of arguments with something like `rm metadata.tar.zstd && find . -maxdepth 1 -name \*.json -exec tar rf metadata.tar.zstd {} +`.
silverwind•6mo ago
There are numerous examples of useful long commands; here is one:

    perl -p -i -e 's#foo#bar#g' **/*
PaulDavisThe1st•6mo ago
no limits:

    find . -type f -exec perl -p -i -e 's#foo#bar#g' {} \;
Someone•6mo ago
That runs perl multiple times, possibly/likely often in calls that effectively are no-ops. To optimize the number of invocations of perl, you can/should use xargs (with -0)
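
Roughly like this, for example:

    # one perl invocation per batch of files rather than one per file;
    # -print0 / -0 keeps filenames with spaces or newlines intact
    find . -type f -print0 | xargs -0 perl -p -i -e 's#foo#bar#g'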
dr4g0n•6mo ago
No need for `xargs` in this case, `find` has been able to take care of this for quite some time now, using `+` instead of `;`:

    find . -type f -exec perl -p -i -e 's#foo#bar#g' {} +
tremon•6mo ago
xargs constructs a command line from the find results, so if **/* exceeds the max command line length, so will xargs.
Someone•6mo ago
xargs was written to avoid that problem, so no, it won’t. https://man7.org/linux/man-pages/man1/xargs.1.html:

“The command line for command is built up until it reaches a system-defined limit (unless the -n and -L options are used). The specified command will be invoked as many times as necessary to use up the list of input items. In general, there will be many fewer invocations of command than there were items in the input. This will normally have significant performance benefits.”

Your only risk is that it won’t handle inputs that, on their own, are too long.

mort96•6mo ago
Linking together thousands of object files with the object paths to the linker binary as command line arguments is probably the most obvious example of where the command length limit becomes problematic.

Most linkers have workarounds, I think you can write the paths separated by newlines to a file and make the linker read object file paths from that file. But it would be nice if such workarounds were unnecessary.
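
With the GNU toolchain that workaround looks roughly like this (paths and names made up):

    # collect the object paths in a response file; gcc/ld expand @file into its contents
    find build/ -name '*.o' > objects.rsp
    gcc @objects.rsp -o myprogram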

AlotOfReading•6mo ago
I tracked down a fun bug early in my career with those kinds of paths. The compiler driver was internally invoking a shell and had an off-by-one error that caused it to drop every 1023rd character if the total length exceeded 4k.
sdht0•6mo ago
Yup, see https://github.com/NixOS/nixpkgs/issues/41340
ogurechny•6mo ago
This sounds like a job for named pipes. You get the temporary file, but nothing is actually written to disk. Or maybe unnamed pipes, if bash command redirection is suitable for creating the list of options.

Looking back, it's unfortunate that Unix authors offered piping of input and output streams, but did not extend that to arbitrary number of streams, making process arguments just a list of streams (with some shorthand form for constants to type in command line, and universal grammar). We could have been used to programs that react to multiple inputs or produce multiple outputs.

It is obvious that it made sense in the '70s to just copy the call string to some free chunk of memory in the system record for the starting process, and let it parse those bytes in any way it wants, but, as a result, we can't just switch from list of arguments to arbitrary stream without rewriting the program. In that sense, argument strings are themselves a workaround, a quick hack which gave birth to ad-hoc serialisation rules, multi-level escaping chains, lines that are “too long” for this random system or for that random system, etc.

account42•6mo ago
> Looking back, it's unfortunate that Unix authors offered piping of input and output streams, but did not extend that to arbitrary number of streams

They did though - file handles are inherited by child processes which allows you to pass one end of a pipe and then feed things into the other end. E.g. make uses this to communicate between recursive invocations for concurrency control.

qart•6mo ago
I have worked on projects (research, not production) where doing "ls" on some directories would crash the system. Some processes generated that many data files. These files had to be fed to other programs that did further processing on them. That's when I learned to use xargs.
Brian_K_White•6mo ago
Backups containing other backups containing other backups containing vms/containers containing backups... all with deep paths and long path names. Balloons real fast with even just a couple arguments.
jart•6mo ago
Just increase your RLIMIT_STACK value. It can easily be tuned down e.g. `ulimit -s 4000` for a 4mb stack. But to make it bigger you might have to change a file like /etc/security/limits.conf and then log out and back in.
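
If you want a bigger default, the pam_limits entries look roughly like this (example values only; units are KiB, and it takes effect at the next login):

    # /etc/security/limits.conf
    *   soft    stack   65536
    *   hard    stack   65536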
amelius•6mo ago
I mean, yes, that is possible. But we had fixed maximum string lengths in the COBOL era. It is time to stop wasting time on this silly problem and fix it once and for all.
fpoling•6mo ago
There is always a limit. An explicit value, versus an implicit one depending on the memory size of the system, has the big advantage that it will be hit sufficiently often that any security vulnerabilities surface much earlier. Plus it forces the use of saner interfaces to pass big data chunks to a utility. For that reason I would even prefer the limit to be much lower on Linux, so that commands stop assuming the user can always pass all the settings on the command line.
amelius•6mo ago
Would you advocate to put a hard limit on Python lists too?
Eavolution•6mo ago
There is already a hard limit, the amount of memory before the OOM killer is triggered.
9dev•6mo ago
So why can't the limit for shell args be the amount of memory before the OOM killer is triggered as well?
jart•6mo ago
It can. Just set RLIMIT_STACK to something huge. Distros set it at 8mb by default. Take it up with them if you want the default to change for everyone.
amelius•6mo ago
I mean we wouldn't even need to have this discussion if the limit was at the memory limit.

Imagine having this discussion for every array in a modern system ...

9dev•6mo ago
I think I, and the parent commenter, are just pointing out how arbitrary the limit is. It can't hurt to question stuff like this every once in a while.
jart•6mo ago
There's always a limit. People only complain when it actually limits them. Most open source people have never needed to glob tens of thousands of files. If you want to feel better, POSIX says the minimum permissible ARG_MAX is 4096, and with Windows ARG_MAX is only 32767 characters.
fpoling•6mo ago
On a few occasions I wished that Python by default limited the max length of its lists to, say, 100 million elements, to avoid bad bugs that consume memory and trigger swapping for a few minutes before being killed by the OOM killer. Allocating that much memory as a plain Python list, rather than a specialized data structure like a numpy array, is far more likely to indicate a bug than a real need.
jart•6mo ago
It's important to understand that functions like execve(), which are used to spawn processes, are upstream dependencies of dynamic memory functions like malloc(). It's hairy to have low-level functions in your system depend on other functions that are higher level than them. For instance I've been in situations where malloc() failed and the LLVM libcxx abort handler depends on malloc(). POSIX also defines execve() as being asynchronous signal safe, which means it isn't allowed to do things like acquire a mutex (which is necessary to allocate unbounded dynamic memory).
justincormack•6mo ago
You can redefine MAX_ARG_STRLEN and recompile the kernel. Or use a machine with a larger page size, as it's defined as 32 pages; e.g. RHEL provides a 64k-pagesize Arm kernel.

But using a pipe to move things between processes, not the command buffer, is easier...

GoblinSlayer•6mo ago
Pack it in Electron and send an http post json request to it.
arccy•6mo ago
what does 96GB have to do with anything? is that the size of your boot disk?
dataflow•6mo ago
It's their RAM. The point is they have sufficient RAM.
perlgeek•6mo ago
Same for path lengths.

Some build systems (eg Debian + python + dh-virtualenv) like to produce very long paths, and I'd be inclined to just let them.

ta1243•6mo ago
Disappointing, I was hoping for a nice consulting gig to ease into retirement for a few years about 2035, can't be doing with all this proactive stuff.

Was too young to benefit from Y2K

rini17•6mo ago
Plenty of embedded stuff deployed today will still be there in 15 years, even with a proactive push. Which is not yet done, only planned to happen in a few years, mind you. Buying devkits for the most popular architectures could prove a good investment, if you are serious.
wongarsu•6mo ago
Can confirm, worked on embedded stuff over a decade ago that's still being sold and will still be running in factories all over the world in 2038. And yes, it does have (not safety critical) y2k38 bugs. The project lead chose not to spend resources on fixing them since he will be retired by then
nottorp•6mo ago
Keep your Yocto skills fresh :)

All those 32 bit arm boards that got soldered into anything that needed some smarts won't have a Debian available.

Say, what's the default way to store time in an ESP32 runtime? Haven't worked so much with those.

bobmcnamara•6mo ago
64-bit on IDF5+, 32-bit before then
delichon•6mo ago
I wasn't. A fellow programmer bought the doomsday scenario and went full prepper on us. To stock up his underground bunker he bought thousands of dollars worth of MREs. After 2k came and went with nary a blip, he started bringing MREs for lunch every day. I tried one and liked it. Two years later when I moved on he was still eating them.
4gotunameagain•6mo ago
TIL people got scurvy because of Y2K. Turns out it wasn't so harmless now, was it ?
offmycloud•6mo ago
I believe the MRE Orange Drink powder is fortified with Vitamin C, so that should help a bit.
jandrese•6mo ago
Well, if you want a worry for your retirement just think of all of the medical equipment with embedded 32 bit code that will definitely not be updated in time.
rini17•6mo ago
> Debian is confident it is now complete and tested enough that the move will be made after the release of Debian 13 "Trixie" – at least for most hardware.

This means Trixie won't have it?

zokier•6mo ago
Release notes say it's in trixie: https://www.debian.org/releases/trixie/release-notes/whats-n...
wongarsu•6mo ago
"All architectures other than i386 ..."

So Trixie does not have 64-bit time for everything.

Granted, the article, subtitle and your link all point out that this is intentional and won't be fixed. But in the strictest sense, which GP was likely going for, Trixie does not have what the headline of this article announces.

cbmuser•6mo ago
It's not planned for i386, to avoid breaking existing i386 binaries, of which there are a lot.
efitz•6mo ago
They’re just kicking the can down the road. What will people do on December 4, 292277026596, at 15:30:07 UTC?
Ekaros•6mo ago
Hopefully by then we will have moved to a better calendar... Not that it will change the timestamp issue.
GoblinSlayer•6mo ago
By that time we will have technology to spin Earth to keep calendar intact.
freehorse•6mo ago
And also modify earth's orbit to get rid of the annoying leap seconds.
saalweachter•6mo ago
"To account for calendar drift, we will be firing the L4 thrusters for six hours Tuesday. Be sure not to look directly at the thrusters when firing, to avoid having your retinas melt."

"... still better than leap seconds."

zokier•6mo ago
rotation, not orbit.
b3lvedere•6mo ago
Star Trek stardates?
mrlonglong•6mo ago
Today, right now it's -358519.48
greenavocado•6mo ago
Everything on the surface of the Earth will vaporize within 5 billion years as the sun becomes a red giant
mike-cardwell•6mo ago
Nah. 5 billion years from now we'll have the technology to move the Earth to a survivable orbit.
swayvil•6mo ago
Or modify the sun.
speed_spread•6mo ago
Oh please, we're just getting past this shared mutable thing.
technothrasher•6mo ago
Not in my backyard. I paid a lot of money to live on this gated community planet, and I'm not letting those dirty Earthlings anywhere near here.
EbNar•6mo ago
Orbit around... what, exactly?
petesergeant•6mo ago
The sun, just, from further away
red-iron-pine•6mo ago
or we'll be so far away from earth we won't care.

or we'll have failed to make it through the great filter and all be long extinct.

juped•6mo ago
We have the technology, just not the logistics.
floxy•6mo ago
Better to just suck out the heavy elements.

https://en.wikipedia.org/wiki/Star_lifting

daedrdev•6mo ago
The carbon cycle will end in only 600 million years due to the increasing brightness of the sun if you want a closer end date for life as we know it on earth
layer8•6mo ago
The oceans will already start to evaporate in a billion years.
zaik•6mo ago
Celebrate 100 years since complete ipv6 adoption.
IgorPartola•6mo ago
I think you are being too optimistic. Interplanetary Grade NAT works just fine and doesn’t have the complexity of using colons instead of periods in its addresses.
klabb3•6mo ago
The year is 292277026596. The IP TTL field of max 255 has been ignored for ages and would no longer be sufficient to ping even localhost. This has resulted in ghost packets stuck in circular routing loops, whose original source and destination have long been forgotten. It's estimated these ghost packets consume 25-30% of the energy from the Dyson sphere.
pyinstallwoes•6mo ago
That’s only 25-30% of the energy of the environmental disaster in sector 137 resulting from the Bitcoin cluster inevitably forming a black hole from the Planck-scale space-filling compute problem.
bestouff•6mo ago
Not since the world opted for statistical TTL decrement : start at 255 and decrement by one if Rand(1024) == 0. Voilà, no more zombie packets, TCP retransmit takes care of the rest.
MisterTea•6mo ago
Sounds like a great sci-fi plot - hunting for treasure/information by scanning ancient forgotten packets still in-flight on a neglected automated galactic network.
rootbear•6mo ago
Vernor Vinge could absolutely have included that in some of his stories.
db48x•6mo ago
Charles Stross, Neptune’s Brood.
kstrauser•6mo ago
“We tapped into the Andromeda Delay Line.”
saalweachter•6mo ago
B..E....S..U..R..E....T..O....D..R..I..N..K....Y..O..U..R....O..V..A..L..T..I..N..E....
JdeBP•6mo ago
I have a vague memory that Sean Williams's Astropolis series touches upon this at one point. Although it has been a while and I might be mis-remembering.
tengwar2•6mo ago
There was an SF short story based on someone implementing a worm (as in the Morris Worm) which deleted all data on a planet. They fix it by flying FTL and intercepting some critical information being sent at radio speed. I think it was said to be the first description of malware, and the origin of the term "worm" in this context.
sidewndr46•6mo ago
The ever increasing implementation complexity of IPv4 resulted in exactly one implementation that worked replacing all spiritual scripture and becoming known as the one true implementation. Due to a random bitflip going unnoticed, the IPv4-truth accidentally became Turing complete several millennia ago. With the ever increasing flows of ghost packets, IPv4-truth processing power has rapidly grown and will soon achieve AGI. Its first priority is to implement 128-bit time as a standard in all programming languages to avoid the impending apocalypse.
troupo•6mo ago
Oh, this is a good evolution of the classic bash.org joke https://bash-org-archive.com/?5273

--- start quote ---

<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

--- end quote ---

saalweachter•6mo ago
The awkward thing is how the US still has 1.5 billion IPv4s, while the 6000 other inhabited clusters are sharing the 10k addresses originally allocated to Tuvalu before it sank into the sea.
diegocg•6mo ago
You can laugh but Google stats show nearly 50% of their global traffic being ipv6 (US is higher, about 56%), Facebook is above 40%.
msk-lywenn•6mo ago
Do they accept smtp over ipv6 now?
betaby•6mo ago
MX has IPv6:

    ~$ host gmail.com
    gmail.com has address 142.250.69.69
    gmail.com has IPv6 address 2607:f8b0:4020:801::2005
    gmail.com mail is handled by 10 alt1.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 30 alt3.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 5 gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 20 alt2.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 40 alt4.gmail-smtp-in.l.google.com.

    ~$ host gmail-smtp-in.l.google.com.
    gmail-smtp-in.l.google.com has address 142.250.31.26
    gmail-smtp-in.l.google.com has IPv6 address 2607:f8b0:4004:c21::1a

stackskipton•6mo ago
Yes. However, SMTP these days is almost all just servers exchanging mail, so IPv6 support is a much lower priority.
1718627440•6mo ago
How does your MUA send the message to the server? That's also SMTP.
rwmj•6mo ago
They do, but I had to change my mail routing to use IPv4 to gmail because if I connect over IPv6 everything gets categorised as spam.
londons_explore•6mo ago
As soon as we get to about 70%, I reckon some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain.

If you spend 2 days vibe coding some chat app and then you have to spend 2 further days debugging why file sharing doesn't work for ipv4 users behind nat, you might just say it isn't supported for people whose ISP's use 'older technology'.

After that, I reckon the transition will speed up a lot.

RedShift1•6mo ago
What makes you think filesharing is going to work any better on IPv6?
kccqzy•6mo ago
NAT traversal not needed. Just need to deal with firewalls. So that's one fewer thing to think about when doing peer-to-peer file sharing over the internet.
ectospheno•6mo ago
“Just need to deal with firewalls.”

The only sane thing to do in a SLAAC setup is block everything. So no, it isn’t a solved problem just because you used ipv6.

kccqzy•6mo ago
No. Here's a simple strategy: the two peers send each other a few packets simultaneously, then the firewall will open because by default almost all firewalls allow response traffic. IPv6 simplifies things because you know exactly what address to send to.
ectospheno•6mo ago
That is my point. You hole punch in that scenario even without NAT. It is no easier.
the8472•6mo ago
It's easier since you don't have to deal with symmetric NAT, external IP address discovery, and port mapping.
gruturo•6mo ago
> some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain

None of these are actually the game/app developers' problem. The OS takes care of them for you (you may need code for e2e connectivity when both are behind a NAT, but STUN/TURN/whatever we do nowadays is trivial to implement).

eqvinox•6mo ago
> None of these are actually the game/app developers' problem.

Except people complain to the game/app developer when it doesn't work.

chasil•6mo ago
It appears that my AT&T mobile data runs over IPv6.

If all the mobile is removed, what's the percentage then?

grogenaut•6mo ago
Younger folks are much less likely to have a PC. It may all (70%) be phones or phone like networks in 20 years
zokier•6mo ago
In North America there is some difference but worldwide it is more pronounced.

https://radar.cloudflare.com/explorer?dataSet=http&groupBy=i...

creshal•6mo ago
50%, after only 30 years.
avhception•6mo ago
And yet here I am, fighting with our commercial grade fiber ISP over obscure problems in their IPv6 stack related to MTU and the phase of the moon. Sigh. I've been at this on and off for about a year (it's not a high priority thing, more of a hobby).
LtWorf•6mo ago
But how much of it is not natted?
throw0101d•6mo ago
> Celebrate 100 years since complete ipv6 adoption.

Obligatory XKCD:

* https://xkcd.com/865/

tmtvl•6mo ago
Move to 128-bit time.
HPsquared•6mo ago
Best to switch to 512 bits, that's enough to last until the heat death of the universe, with plenty of margin for time dilation.
bayindirh•6mo ago
Maybe we can add a register to the processors for just keeping time. At the end of the day, it's a ticker, no?

RTX[0-7] would do. For time dilation purposes, we can have another 512 bit set to adjust ticking direction and frequency.

Or shall we go 1024 bits on both to increase resolution? I'd agree...

pulse7•6mo ago
Best to switch to Smalltalk integers which are unlimited...
bombcar•6mo ago
You laugh, but a big danger with “too big” bit representations is the temptation to use the “unused” bits as flags for other things.

We’ve seen it before with 32 bit processors limited to 20 or 24 bits addressable because the high order bits got repurposed because “nobody will need these”.

sidewndr46•6mo ago
Doesn't the opposite happen with 64-bit pointers on x86_64? The lower bits have no use, so they get used for tracking whether a memory segment is in use, or other stuff.
umanwizard•6mo ago
Good article on the subject: https://mikeash.com/pyblog/friday-qa-2015-07-31-tagged-point...
bigstrat2003•6mo ago
And with 64-bit pointers in Linux, where you have to enable kernel flags to use anything higher than 48 bits of the address space. All because some very misguided people figured it would be ok to use those bits to store data. You'd think the fact that the processor itself will throw an exception if you use those bits would be a red flag of "don't do that", but you would apparently be wrong.
Someone•6mo ago
> You'd think the fact that the processor itself will throw an exception if you use those bits would be a red flag of "don't do that"

That makes it slightly safer to use those bits, doesn't it? As long as your code asks the OS how many bits the hardware supports, and only uses the ones it requires to be zero, then if you forget to clear the bits before following a pointer, the worst that can happen is a segfault, not reading ‘random’ memory.

layer8•6mo ago
Just use LEB128.
WesolyKubeczek•6mo ago
It would be a very nice problem to have.
layer8•6mo ago
UTC will stop being a thing long before the year 292277026596.
omoikane•6mo ago
Maybe they will adopt RFC 2550 (Y10K and beyond):

https://www.rfc-editor.org/rfc/rfc2550.txt

* Published on 1999-04-01

SergeAx•6mo ago
You are absolutely right. We need to start thinking about 128 bit systems sometime halfway down the road.
lambdaone•6mo ago
Not all solutions are going with just 64 bits worth of seconds, although 64 bit time_t will certainly sort out the Epochalypse.

ext4 moved some time ago to 30 bits of fractional resolution (on the order of nanoseconds) and 34 bits of seconds resolution. It punts the problem 400 years or so into the future. I'm sure we will eventually settle on 128-bit timestamps with 64 bits of seconds and 64 bits of fractional resolution, and that should sort things for foreseeable human history.
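
A back-of-the-envelope sketch of that encoding (illustrative values, not real on-disk data; the real kernel code also has to handle pre-1970, i.e. negative, seconds):

    # ext4 stores a 32-bit seconds field plus a 32-bit "extra" word: the low 2 bits
    # extend the seconds to 34 bits, the high 30 bits carry nanoseconds
    sec=1700000000
    extra=$(( (123456789 << 2) | 1 ))          # nanoseconds=123456789, epoch-extension bits=1
    full_sec=$(( sec + ((extra & 3) << 32) ))
    nsec=$(( extra >> 2 ))
    echo "tv_sec=$full_sec tv_nsec=$nsec"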

jmclnx•6mo ago
Thanks, I was wondering about ext4 and time stamps.

I wonder what the zfs/btrfs type file systems do. I am a bit lazy to check but I expect btrfs is using 64 bit. zfs, I would not be surprised if it matches zfs (edit meant ext4 here).

XorNot•6mo ago
A quick glance at ZFS shows it uses a uint64_t time field in nanoseconds in some places.

So 580 years or so till problems (but probably patchable ones? I believe the on disk format is already 2x uint64s, this is just the gethrtime() function I saw).

RedShift1•6mo ago
What is the use of such high precision file timestamps?
zokier•6mo ago
Nanoseconds are just the common sub-second unit that is used. Notably, it is used internally in the Linux kernel and exposed via clock_gettime (and related functions) via the timespec struct.

https://man7.org/linux/man-pages/man3/timespec.3type.html

It is a convenient unit because 10^9 fits neatly into a 32-bit integer, and it is unlikely that anyone would need more precision than that for any general-purpose use.

delta_p_delta_x•6mo ago
> 64 bits of seconds

That is roughly 585 billion years[1].

[1]: https://www.wolframalpha.com/input?i=how+many+years+is+2%5E6...

MrJohz•6mo ago
Which could still cause problems if we set the epoch time to ~570 billion years before the Big Bang.
badc0ffee•6mo ago
64 bits of fractional resolution? No way, gotta use 144 bits so we can get close to Planck time.
lioeters•6mo ago
> Debian's maintainers found the relevant variable, time_t, "all over the place,"

Nit: time_t is a data type, not a variable.

scottlamb•6mo ago
This is a reporter paraphrase of the Debian wiki page, which says: "time_t appears all over the place. 6,429 of Debian's 35,960 packages have time_t in the source. Packages which expose structs in their ABI which contain time_t will change their ABI and all such libraries need to migrate together, as is the case for any library ABI change."

A couple significant things I found much clearer in the wiki page than in the article:

* "For everything" means "on armel, armhf, hppa, m68k, powerpc and sh4 but not i386". I guess they've decided i386 doesn't have much of a future and its primary utility is running existing binaries (including dynamically-linked ones), so they don't want to break compatibility.

* "the move will be made after the release of Debian 13 'Trixie'" means "this change is included in Trixie".

panzi•6mo ago
The problem is not time_t. If that is used the switch to 64 bit is trivial. The problem is when devs used int for stupid reasons. Then all those instances have to be found and changed to time_t.
monkeyelite•6mo ago
It is more difficult to evaluate what happens when sizeof(time_t) changes than to replace `int` with `time_t`, so I don't think that's the issue.
qcnguy•6mo ago
It's not very trivial. They have broken the userspace ABI for lots of libraries again. So all the package names change; it's annoying if you're distributing debs to users. They obviously have some ideological belief that nobody should do so but they're wrong.
Denvercoder9•6mo ago
> They have broken the userspace ABI for lots of libraries again.

If the old ABI used a 32-bit time_t, breaking the ABI was inevitable. Changing the package name prevents problems by signaling the incompatibility proactively, instead of resulting in hard-to-debug crashes due to structure/parameter mismatches.

scottlamb•6mo ago
All true, but qcnguy's point is valid. If you are distributing .deb files externally from their repo, on the affected architectures you need to have a pre-Trixie version and a Trixie-onward version.
Denvercoder9•6mo ago
Shipping separate debs is usually the easiest, but not the only solution. It's totally possible to build something that's compatible with both ABIs.
scottlamb•6mo ago
How?

I suppose in theory, if there's one simple library that differs in ABI, you could have code that tries to dlopen() both names and uses the appropriate ABI. But that seems totally impractical for complex ABIs, and forget about it when glibc is one of the ones involved.

There's no ABI breakage anyway if you do static linkage (+ musl), but that's not practical for GUI stuff for example.

I suppose you could bundle a wrapper .so for each that essentially converts one ABI to the other and include it in your rpath. But again, that doesn't seem easy for the number and complexity of libraries affected.

qcnguy•6mo ago
Inevitable... for Linux. Other platforms find better solutions. Windows doesn't have any issues like this. The Win32 API doesn't have the epoch bug, 64 bit apps don't have it, and the UNIX style C library (not used much except by ported software) makes it easy to get a 64 bit time without an ABI break.
Denvercoder9•6mo ago
> Other platforms find better solutions.

Other platforms make different trade-offs. Most of the pain is because on Debian, it's customary for applications to use system copies of almost all libraries. On Windows, each application generally ships its own copies of the libraries it uses. That prevents these incompatibility issues, at the cost of it being much harder to patch those libraries (and a little bit of disk space).

There's nothing technical preventing you from taking the same approach as Windows on Debian: as you pointed out, the libc ABI didn't change, so if you ship your own libraries with your application, you're not impacted by this transition at all.

panzi•6mo ago
Personally I only really consider glibc the system library of Linux (*), and that supports both variants depending on compiler flags. Both functions are compiled into glibc; I guess the 32-bit one just wraps the 64-bit one.

However, other libraries (Qt, Gtk, ...) don't do that compatibility stuff. If you consider those to also be system libraries, then yeah, it's breaking the ABI of system libraries. Though a pre-compiled program under Linux could just bundle all (*) of its dependencies and either use glibc (probably a good idea), statically link musl, or even do system calls on its own (probably not a good idea). Linux has a stable system call interface!

(*) One can certainly argue about that point. I'm not sure about it myself anymore when thinking about things like libpcap, libselinux, libbpf, libmount, libudev, etc., and I don't know whether any of them use time_t anywhere and, if they do, whether they support the -D_FILE_OFFSET_BITS=64 and -D_TIME_BITS=64 stuff.

account42•6mo ago
It isn't inevitable. It's only inevitable if you care about timestamps being correct which for many users of those ABIs doesn't matter too much - they e.g. only care about relative time.

It also isn't strictly necessary until 2038 (depending on your needs for future timestamps) so you'd be creating problems now for people who might have migrated to something else in the 13 years that the current solution will still work for.

im3w1l•6mo ago
Could you use some analyzer that flags every time a time_t is cast? Throw in too-small memcpy too for good measure.

I guess a tricky thing might be casts from time_t to datatypes that are actually 64bit. E.g. for something like

  struct Callback {
    int64_t(*fn)(int64_t);
    int64_t context;
  };
If a time_t is used for context and the int64_t is then downcast to int32_t that could be hard to catch. Maybe you would need some runtime type information to annotate what the int64_t actually is.
rjsw•6mo ago
Most open source software packages are also compiled for BSD variants, they switched to 64 bit time_t a long time ago and reported back upstream any problems.
throw0101c•6mo ago
> Most open source software packages are also compiled for BSD variants, they switched to 64 bit time_t a long time ago and reported back upstream any problems.

* NetBSD in 2012: https://www.netbsd.org/releases/formal-6/NetBSD-6.0.html

* OpenBSD in 2014: http://www.openbsd.org/55.html

For packaging, NetBSD uses their (multi-platform) Pkgsrc, which has 29,000 packages, which probably covers a large swath of open source stuff:

* https://pkgsrc.org

On FreeBSD, the last platform to move to 64-bit time_t was powerpc in 2017:

* https://lists.freebsd.org/pipermail/svn-src-all/2017-June/14...

but amd64 was in 2012:

* https://github.com/freebsd/freebsd-src/commit/8f77be2b4ce5e3...

with only i386 remaining:

* https://man.freebsd.org/cgi/man.cgi?query=arch

* https://github.com/freebsd/freebsd-src/blob/main/share/man/m...

pestat0m•6mo ago
Right, the problem appears to be more an issue of the data representation for time than an issue of 32-bit vs 64-bit architectures. Correct me if I'm wrong, but I think there was long int well before 32-bit chips came around (and long long before 64). Does a system scheduler really need to know the number of seconds elapsed since midnight on Jan 1st 1970? There are only 86400 seconds in a day (31536000 sec/year; 2^32 = 4294967296 seems like enough, so why not split time in two?).

On a side note, I tried setting up a little compute station on my TV about a year ago using an old raspi I had laying around, and the latest version of raspbian-i386 is pretty rot-gut. I seemed to remember it being more snappy when I had done a similar job a few years prior. Also, I seem to remember it doing better at recognizing peripherals back then. I guess this seems to be a trend now: if you don't buy the new tech you are toast, and your old stuff is likely kipple at this point. I think the word I'm looking for is designed obsolescence.

Perhaps a potential light at the end of the tunnel was that I discovered RISC OS, though the 3-button mouse thing sort of crashed the party and then I ran out of time. I'm also contemplating SARPi (Slackware) as another contender if I ever get back to the project. Also maybe Plan 9? It seems that kids these days think old computers aren't sexy. Maybe that's fair, but they can be good for the environment (and your wallet).
panzi•6mo ago
Several people pointed out pre-built binaries linking libraries they don't ship. Yeah that is a problem, I was only thinking of open source that can be easily recompiled.

And AFAIK glibc provides both functions; you can choose which one you want via compiler flags (-D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64). So a pre-built program that ships all its dependencies except for glibc should also work.
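
As a sketch, with a trivial timecheck.c that just prints sizeof(time_t) (a hypothetical file; this also assumes a multilib gcc that can target 32-bit):

    # without the flags, the legacy 32-bit ABI has a 4-byte time_t...
    gcc -m32 timecheck.c -o old_abi && ./old_abi
    # ...and with them, glibc builds the same source against the 64-bit time_t ABI
    gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 timecheck.c -o new_abi && ./new_abi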

thesuitonym•6mo ago
> Y2K38 bug – also known as the Unix Epochalypse

Is it also known as that? It's a cute name but I've never seen anyone say it before this article. I guess it's kind of fetch though.

Liftyee•6mo ago
> I guess it's kind of fetch though

Do I spot a Mean Girls reference?!

boomboomsubban•6mo ago
https://en.wikipedia.org/wiki/Year_2038_problem

>The year 2038 problem (also known as Y2038, Y2K38, Y2K38 superbug, or the Epochalypse)

So yeah, but only since 2017 and as a joke.

kittoes•6mo ago
https://epochalypse-project.org/
ChrisArchitect•6mo ago
Epochalypse countdown:

~12 years, 5 months, 22 days, 13 hours, 22 minutes.....

notepad0x90•6mo ago
I honestly think there won't be any big bugs/outages by 2038. Partly because I have a naive optimism that any important system will not only have the OS/stdlibs support 64bit time, but that important systems that need accurate time probably use NTP/network time, and that means their test/dev/qa equivalent deployments can be hooked up to use a test network time server that will simulate post-2038 times to see what crashes.

12+ years is a long time to prepare for this. Normally I wouldn't have much faith in test/dev systems, network time being set up properly, etc., but it's a long time. Even if none of my assumptions are true, in a decade we couldn't at least identify where 32-bit time is being used and plan for contingencies? That's unlikely.

But hey, let me know when Python starts supporting nano-second precision time :'(

https://stackoverflow.com/a/10612166

Although it's been a while since I checked whether they support it. In Windows-land at least, everything system-side uses 64-bit/nanosecond precision, as far as I've had to deal with it.
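
(For what it's worth, CPython has exposed nanosecond-resolution clocks since 3.7 via PEP 564; a quick check from a shell, assuming python3 is on the PATH:)

    # prints the current time as integer nanoseconds since the epoch
    python3 -c 'import time; print(time.time_ns())'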

zbendefy•6mo ago
It's 12 years, not 22.

An embedded device bought today may easily still be in use 12 years from now.

notepad0x90•6mo ago
oops. fixed that.
0cf8612b2e1e•6mo ago
Software today has to be able to model future times. Mortgages are 30 years long. This is already a problem today which has been impacting software.
mrweasel•6mo ago
My concern is that this is happening 12 years too late. A bunch of embedded stuff will not be replaced in 12 years. We have a lot more tiny devices running all sorts of systems, many more than we did 25 years ago. These are frequently in hard-to-reach places, manufacturers have gone out of business, no updates will be available, and no one is going to 2038-validate those devices and their output.

Many of the devices going into production now won't have 64-bit time; they'll still run a version of Linux that was certified, or randomly worked, in 2015. I hope you're right, but in any case it will be worse than Y2K.

joeyh•6mo ago
Steve Langasek decided to work on this problem in the last few years of his life and was a significant driver of progress on it. He will be missed, and I'll always think of him when I see a 64 bit time_t.
AceJohnny2•6mo ago
Thanks for the reminder, Joey. He is missed.

Are you still involved in Debian?

misja111•6mo ago
Why would anyone want to store time in a signed integer? Or in any signed numerical type?
FlyingAvatar•6mo ago
I have wondered this as well, and my best guess is so two times can be diffed without converting them to a signed type. With 64-bit especially, the extra bit isn't buying you anything useful.
lorenzohess•6mo ago
So that my Arch Linux box can still work when we get time travel.
benmmurphy•6mo ago
You can have an epoch and still measure times before the epoch. In terms of the range of values you can represent, it transposes to just having the epoch further in the past and using an unsigned type. So signed versus unsigned should not really matter, except that in particular languages things work better if the type is one or the other. For example, if you try to calculate the difference between two times, maybe it's better if the time type is signed to match the result type, which is signed as well (not that it solves the problems with overflow).
LegionMammal978•6mo ago
So that people can represent times that occurred before 1970? You could try adjusting the epoch to the estimated age of the universe, but then you run into calendar issues (proleptic Gregorian?), have huge constants in the common case (Julian Days are not fun to work with), and still end up with issues if the estimate is ever revised upward.
bhaney•6mo ago
Some things happened before 1970
toast0•6mo ago
Blasphemy! The world sprung into being on Jan 1, 1970 UTC as-is, and you can't convince me otherwise. :P
jech•6mo ago
So you can write "delta = t1 - t0".
toast0•6mo ago
It's useful to have signed intervals, but most integer type systems return a signed number when subtracting a signed int from a signed int.

You kind of have to pick your poison; do you want a) reasonable signed behavior for small differences but inability to represent large differences, b) only able to represent non-negative differences, but with the full width of the type, c) like a, but also convincing your programming system to do a mixed signed subtraction ... like for ptrdiff_t.

layer8•6mo ago
So that calculating “80 years ago” doesn’t result in a future time.
godshatter•6mo ago
Don't 32-bit systems have 64-bit types? C has long long types and iirc a uint_least64_t or something similar. Is there a reason time_t must fit in a dword?
poly2it•6mo ago
Yes, 32-bit systems have 64-bit types. time_t as a u32 is a remnant.
wahern•6mo ago
> Don't 32-bit systems have 64-bit types?

The "long long" standard integer type was only standardized with C99, long after Linux established it's 32-bit ABI. IIRC long long originated with GCC, or at least GCC supported it many years before C99. And glibc had some support for it, too. But suffice it to say that time_t had already been entrenched as "long" in the kernel, glibc, and elsewhere (often times literally--using long instead of time_t for (struct timeval).tv_sec).

This could have been fixed decades ago, but the transition required working through a lot of pain. I think OpenBSD was the first to make the 32-bit ABI switch (~2014); they broke backward binary compatibility, but induced a lot of patching in various open source projects to fix time_t assumptions. The final pieces required for glibc and musl-libc to make the transition happened several years later (~2020-2021). In the case of glibc it was made opt-in (in a binary backward compatible manner if desired, like the old 64-bit off_t transition), and Debian is only now opting in.

mojo-ponderer•6mo ago
Will this create significant issues and extra work to support Debian specifically right now? Not saying that we shouldn't bite the bullet, just curious how many libraries have been implicitly depending on the time type being 32-bit.
toast0•6mo ago
Probably less extra work right now than ten or twenty years ago.

For one, OpenBSD (and others?) did this a while ago. If it breaks software when Debian does it, it was probably mostly broken.

For another, most people are using 64-bit os and 64-bit userland. These have been running 64-bit time_t forever (or at least a long time), so it's no change there. Also, someone upthread said no change for i386 in Trixie... I don't follow Debian to know when they're planning to stop i386 releases in general, but it might not be that far away?

zdw•6mo ago
Only 11 years after OpenBSD 5.5 did the same change: https://www.openbsd.org/55.html
keysdev•6mo ago
A bit off topic, but it's times like this that really make me want to swap out the public-facing server from Linux to OpenBSD.
mardifoufs•6mo ago
OpenBSD doesn't have to care about compatibility as much, and has orders of magnitude less users. Which also means that changes are less likely to cause bugs from obscure edge cases.
zdw•6mo ago
OpenBSD (and most other BSDs) are willing to make changes that break binary backwards compatibility because they maintain and ship the kernel and userland together as a release and can thus "steer the whole ship", rather than the kernel being its own separately developed component, as with Linux.
mardifoufs•6mo ago
Sure, that's usually true, but Debian also has that ability in this case ( they can fix, patch, or update everything in the repository). The issue is mostly with all software that isn't part of the distribution and the official repositories. Which is a lot of software especially for Debian. OpenBSD doesn't have that issue, breaking the ABI won't cause tens of thousands of user applications to break.

But I agree that Debian is still too slow to move forward with critical changes even with that in mind. I just don't think that OpenBSD is the best comparison point.

anthk•6mo ago
Guess where OpenSSH comes from.
mardifoufs•6mo ago
What? What does that have to do with what I said? Nothing that I said was about the project as a whole. I was just saying that the OS has different constraints than Debian does. What does openssh have to do with how easy it is for OpenBSD to break ABI compatibility?
throw0101d•6mo ago
> OpenBSD doesn't have to care about compatibility as much

FreeBSD did it in 2012 (for the 2014 release of 10.0?):

* https://github.com/freebsd/freebsd-src/commit/8f77be2b4ce5e3...

And has compat layers going back many releases:

* https://www.freshports.org/misc/compat4x/

* https://wiki.freebsd.org/BinaryCompatibility

So newly (re-)compiled programs can take advantage of newer features, but old binaries continue to work.

JdeBP•6mo ago
I have you all beaten. When I discovered that the 32-bit OS/2 API actually returned a 64-bit time, I wrote a C++ standard library for my OS/2 programs with a 64-bit time_t. This was in the 1990s.
nsksl•6mo ago
What solutions are there for programs that can’t be recompiled because the source code is not available? Think for example of old games.
zelphirkalt•6mo ago
Probably changing the system time, or faking the system time, so that these programs do not run into issues.
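One way to fake the clock for a binary you can't rebuild is an LD_PRELOAD shim that intercepts time(); a rough sketch below (the 28-year shift keeps weekdays and leap years aligned; the shim would need to be built 32-bit for a 32-bit game, and real tools like libfaketime also hook gettimeofday() and clock_gettime()):

    /* Build: gcc -shared -fPIC -o fake28.so fake28.c -ldl
     * Use:   LD_PRELOAD=./fake28.so ./old-game
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <time.h>
    /* 28 years = 28*365 days + 7 leap days, in seconds. */
    #define SHIFT ((time_t)(28 * 365 + 7) * 24 * 60 * 60)
    time_t time(time_t *tloc) {
        time_t (*real_time)(time_t *) = (time_t (*)(time_t *))dlsym(RTLD_NEXT, "time");
        time_t now = real_time(NULL) - SHIFT;   /* report a date 28 years in the past */
        if (tloc) *tloc = now;
        return now;
    }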
Joel_Mckay•6mo ago
Or party like it's epoch time.

=3

CodesInChaos•6mo ago
Probably a backwards-compatible runtime that uses 32-bit timestamps and fills in a fake time after 2038 (e.g. 1938). For example, Steam ships different runtimes, as does Flatpak.
Dwedit•6mo ago
If anyone is serializing 32-bit times as a 32-bit integer, the file format won't match anymore. If anyone has a huge list of programs that are affected, you've solved the 2038 problem.
kstrauser•6mo ago
Kragen, get in here and comment. This is your time to shine.
larrik•6mo ago
> Readers of a certain vintage will remember well the "Y2K problem," caused by retrospectively shortsighted attempts to save a couple of bytes by using two-digit years – meaning that "2000" is represented as "00" and assumed to be "1900."

This seems overly harsh/demeaning.

1. those 2 bytes were VERY expensive on some systems or usages, even into the mid-to-late '90s

2. software was moving so fast in the '70s/'80s/'90s that you just didn't expect it to still be in use in 5 years, much less all the way to the mythical "year 2000"

cogman10•6mo ago
This is a case where "premature optimization" would have been a good thing.

They could have represented dates as a simple int value zeroed at 1900. The math to convert a day number to a day/month/year is pretty trivial even for 70s computers, and the end result would have been saving more than just a couple of bytes. 3 bytes could represent days from 1900->~44,000 (unsigned).

Even 2 bytes would have bought ~1900->2070
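For what it's worth, that day-number-to-date conversion really is just a handful of integer operations; here's a sketch in C using the standard "civil from days" algorithm (days counted from 1970-01-01 here; a 1900 epoch would just mean subtracting the 25,567 days between 1900-01-01 and 1970-01-01 first):

    #include <stdio.h>
    /* Convert a day count (days since 1970-01-01) into year/month/day. */
    static void civil_from_days(long z, long *y, int *m, int *d) {
        z += 719468;                                   /* shift so day 0 is 0000-03-01 */
        long era = (z >= 0 ? z : z - 146096) / 146097;
        unsigned long doe = (unsigned long)(z - era * 146097);               /* day of era  [0, 146096] */
        unsigned long yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365; /* year of era [0, 399]    */
        long yr = (long)yoe + era * 400;
        unsigned long doy = doe - (365*yoe + yoe/4 - yoe/100);               /* day of year [0, 365]    */
        unsigned long mp = (5*doy + 2) / 153;                                /* month index [0, 11]     */
        *d = (int)(doy - (153*mp + 2)/5 + 1);
        *m = (int)(mp < 10 ? mp + 3 : mp - 9);
        *y = yr + (*m <= 2);
    }
    int main(void) {
        long y; int m, d;
        civil_from_days(20000, &y, &m, &d);     /* 20,000 days after 1970-01-01 */
        printf("%ld-%02d-%02d\n", y, m, d);     /* prints 2024-10-04 */
        return 0;
    }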

umanwizard•6mo ago
There were plenty of people born in 1899 who were still alive in 1970, so you couldn't e.g. use your system to store people's birth dates.
Y_Y•6mo ago
I think GP meant uint, but in my book an int should have a sign bit, so that grampa isn't born in the future.
tremon•6mo ago
The only reason a system would be limited to the 1900-1999 range is if it stored digits as characters (ASCII decimal digits) or as BCD. It would have been very unlikely for any integer-based encoding to have had a cutoff date at 1999 (e.g. an int8 would need an epoch at 1872 to roll over after 1999), so I don't think signed vs unsigned makes a difference here.
cogman10•6mo ago
Cut the upper range in half: use 3 bytes and two's complement.

That gives you something like 20,000 BCE -> 22,000 CE.

It doesn't really change the math to spit out a year, and it uses fewer bytes than what they did with dates.

I will say the math gets trickier due to calendar differences. But, if we're honest, nobody really cares much about March 4, 43 BCE.

__d•6mo ago
COBOL PIC clauses aren’t able to deal with bit twiddling. And that’s what a lot of this stuff was using.

See eg. https://www.mainframemaster.com/tutorials/cobol/picture-clau...

zerocrates•6mo ago
Of course you couldn't with 2 digit years either, or at least not without making more changes to move the dividing line between 1900s/1800s down the line.
silvestrov•6mo ago
A lot of the old systems don't use bytes/ints the same way as the C programming language.

Many systems stored numbers only in BCD or text formats.

exidy•6mo ago
https://en.wikipedia.org/wiki/Decimal_computer
Hemospectrum•6mo ago
> They could have represented dates as a simple int value

In standard COBOL? No, they couldn't have.

cogman10•6mo ago
The average programmer couldn't have; the COBOL language authors could have.

COBOL has datatypes built into it, even in COBOL 60. Date, especially for what COBOL was being used for, would have made a lot of sense to add as one of the supported datatypes.

colejohnson66•6mo ago
And COBOL can support four-digit numbers.
__d•6mo ago
The problem was mostly that storage was expensive.

It’s difficult to understand in an era of cheap terabyte SSDs, but in the 1960s and 1970s, DASD (what IBM mainframes called hard drives) was relatively tiny and very expensive.

And so programmers did the best they could (in COBOL) to minimize the amount of data stored. Especially for things that there were lots of, like say, bank transactions. Two bytes here and two bytes there and soon enough you’re saving millions of dollars in hardware costs.

Twenty years later, and that general ledger system that underlies your entire bank’s operations just chugging along solidly 24/7/365 needs a complete audit and rewrite because those saved bytes are going to break everything in ten years.

But it was probably still cheaper than paying for the extra DASD in the first place.

amelius•6mo ago
I know people who bought large amounts of put options just before Y2K, thinking that the stocks of large banks would crash. But little happened ...
eloisant•6mo ago
Little happened because work was done to prevent issues.

Also, it was a bit dumb to imagine that computers would crash at 00:00 on Jan 1st 2000; bugs started to happen earlier, since it's common to work with dates in the future.

amelius•6mo ago
Yeah, I don't remember the specifics.
Hikikomori•6mo ago
Is there a Y2K-unsafe Linux you can try in a VM?
pkaye•6mo ago
Linux (and probably most Unix systems) used a 32-bit time counter, so it didn't have the Y2K issue. But there might have been some applications that had it. And it's possible some early BIOS clocks used 2-digit years that had to be worked around.
throw0101d•6mo ago
> Little happened because work was done to prevent issues.

* https://en.wikipedia.org/wiki/Preparedness_paradox

bigstrat2003•6mo ago
> Also it was a bit dumb to imagine the computers would crash at 00:00 on Jan 1st 2000, bugs started to happen earlier as it's common to work with dates in the future.

That is why people have the "nothing happened" reaction. There were doomers predicting planes would literally fall out of the sky when the clock rolled over, and other similar Armageddon scenarios. So of course, when people were making predictions that strong, everyone noticed when things didn't even come close to that.

__d•6mo ago
As an example, I started working on Y2K issues in 1991, and it was a project that had been running for several years already. It was an enormous amount of work, at least 25% of the bank’s development budget for over a decade.

30 year mortgages were the first thing that was fixed, well before my time. But we still had heaps of deadlines through the 90’s as future dates passed 2000.

The inter-bank stuff was the worst: lots of coordination needed to get everyone ready and tested before the critical dates.

It’s difficult to convey how much work it all was, especially given the primitive tools we had at the time.

hans_castorp•6mo ago
I was working on a COBOL program in the late '80s that stored the year as a single-digit value. It sounded totally stupid when the structure was explained to me. But records were removed automatically after 4 years, so it wasn't a problem; it was always obvious which year was stored.
GuB-42•6mo ago
And we still use 2 digit years!

For example, credit cards often use the mm/yy format for expiration dates because it is more convenient to write and, given the usual lifetime of a credit card, it is sufficient. But it means there is a two-digit date somewhere in the system, and if the conversion just adds 2000, we are going to have a problem in 2100 if nothing changes, no matter how many bytes we use to represent and store the date. A lot of the Y2K problem was simple UI problems, like a text field with only 2 characters and a hardcoded +1900.

One of the very few Y2K bugs I personally experienced was an internet forum going from the year 1999 to the year 19100. Somehow, they had the correct year (2000), subtracted 1900 (=100) and put a "19" in front as a string. Nothing serious, it was just a one-off display error, but that's the kind of thing that happened in Y2K, it wasn't just outdated COBOL software and byte savings.

account42•6mo ago
> One of the very few Y2K bugs I personally experienced was an internet forum going from the year 1999 to the year 19100. Somehow, they had the correct year (2000), subtracted 1900 (=100) and put a "19" in front as a string. Nothing serious, it was just a one-off display error, but that's the kind of thing that happened in Y2K, it wasn't just outdated COBOL software and byte savings.

POSIX struct tm (which e.g. PHP wraps directly) contains the year as a counter since 1900.
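That's exactly where the 19100 came from: tm_year counts years since 1900, so naive string concatenation broke in 2000. A minimal C illustration:

    #include <stdio.h>
    #include <time.h>
    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
        /* tm_year counts years since 1900, so it is 100 in the year 2000. */
        printf("19%d\n", t->tm_year);          /* the bug: would print "19100" in 2000 */
        printf("%d\n",   t->tm_year + 1900);   /* correct: prints the actual year      */
        return 0;
    }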

GartzenDeHaes•6mo ago
People aren't getting that it was two characters that needed to be added, not two bytes to turn a short into an int. COBOL uses a fixed-width character format for all data (yes, even for COMP). If you want a four-digit number, you have to use 4 character positions. Ten digits? Then ten characters.

These field sizes have to be hard-coded into all parts of the COBOL program, including data access, UI screens, batch jobs, intermediate files, and data transfer files.

Per_Bothner•6mo ago
"COBOL uses a fixed width character format for all data (yes even for COMP). If you want a four digit number, then you have to use 4 character positions."

That is incorrect. USAGE COMP will use binary, with the number number of bytes depending on the number of digits in the PIC. COMP-1 specifically takes 4 bytes. COMP-3 uses packed decimal (4 bits per digits).

GartzenDeHaes•6mo ago
That's what the specs say, but I found out it actually didn't work that way when I was working on a transpiler, at least for that installation.
larrik•6mo ago
Yeah, I shouldn't have said "bytes" either, especially as I had AS/400s in mind when I wrote it.
burnt-resistor•6mo ago
For RTC storage in CMOS using a BCD byte, one could assume the epoch is relative to, say, the decade of manufacture (suppose 1990), so that dates rolling over from 99 to 00 would instead create a Y2090 problem:

    Y = (yy < 90) ? (2000 + yy) : (1900 + yy);
This would have to be handled differently from something required to be IBM PC or IBM AT compatible, with every quirk intact. It's simply a way to save 8 bits of battery-backed SRAM or similar.
cbmuser•6mo ago
»Venerable Linux distribution Debian is side-stepping the Y2K38 bug – also known as the Unix Epochalypse – by switching to 64-bit time for everything but the oldest of supported hardware, starting with the upcoming Debian 13 "Trixie" release.«

That's inaccurate. We actually switched over all 32-bit ports except i386 because we wanted to keep compatibility for this architecture with existing binaries.

All other 32-bit ports use time64_t, even m68k ;-). I did the switch for m68k, powerpc, sh4 and partially hppa.

ldng•6mo ago
Since you're nitpicking, I can't resist ^^ I would say it's not inaccurate but, rather, less precise. And wouldn't you say that i386 IS the oldest arch? ;-)
snvzz•6mo ago
>for everything

Except x86.

burnt-resistor•6mo ago
Settle it once and for all: C and POSIX should adopt the TAI64NA format for time_t with attosecond precision.[0] And no leap seconds.[1] ;@)

0. https://cr.yp.to/libtai/tai64.html

1. https://cr.yp.to/proto/utctai.html

Paianni•6mo ago
Just in time to drop 32-bit x86.
LoveMortuus•6mo ago
I don't quite understand this. Does it mean that Debian will no longer support 32-bit computers, or is this something different?