Tao on "blue team" vs. "red team" LLMs

https://mathstodon.xyz/@tao/114915604830689046
268•qsort•4h ago•92 comments

Copyparty, turn almost any device into a file server

https://github.com/9001/copyparty
273•saint11•4h ago•48 comments

Claude Code new limits – Important updates to your Max account usage limits

70•ivanvas•38m ago•35 comments

Interstellar Comet 3I/Atlas: What We Know Now

https://skyandtelescope.org/astronomy-news/interstellar-comet-3i-atlas-what-we-know-now/
21•bikenaga•1h ago•6 comments

Visa and Mastercard are getting overwhelmed by gamer fury over censorship

https://www.polygon.com/news/616835/visa-mastercard-steam-itchio-campaign-adult-games
174•mrzool•1h ago•132 comments

GLM-4.5: Reasoning, Coding, and Agentic Abilities

https://z.ai/blog/glm-4.5
90•GaggiX•5h ago•47 comments

Different Clocks

https://ianto-cannon.github.io/clock.html
5•pppone•10m ago•0 comments

Simplify, then add delightness: On designing for children

https://shaneosullivan.wordpress.com/2025/07/28/on-designing-for-children/
60•shaneos•3h ago•20 comments

Cells that breathe oxygen and sulfur at the same time

https://www.quantamagazine.org/the-cells-that-breathe-two-ways-20250723/
19•sohkamyung•3d ago•0 comments

LLM Embeddings Explained: A Visual and Intuitive Guide

https://huggingface.co/spaces/hesamation/primer-llm-embedding
345•eric-burel•12h ago•67 comments

VPN use surges in UK as new online safety rules kick in

https://www.ft.com/content/356674b0-9f1d-4f95-b1d5-f27570379a9b
483•mmarian•16h ago•721 comments

The first 100% effective HIV prevention drug is approved and going global

https://newatlas.com/infectious-diseases/hiv-prevention-fda-lenacapavir/
191•MBCook•4h ago•48 comments

The Geological Sublime

https://harpers.org/archive/2025/07/the-geological-sublime-lewis-hyde-deep-time/
62•prismatic•6h ago•17 comments

AI Companion Piece

https://thezvi.substack.com/p/ai-companion-piece
53•jsnider3•5h ago•20 comments

Requesting Funding for 90s.dev

https://90s.dev/blog/requesting-funding-for-90s-dev.html
42•90s_dev•3h ago•11 comments

Enough AI copilots, we need AI HUDs

https://www.geoffreylitt.com/2025/07/27/enough-ai-copilots-we-need-ai-huds
732•walterbell•20h ago•220 comments

NixOS on a Tuxedo InfinityBook Pro 14 Gen9 AMD Laptop

https://fnune.com/hardware/2025/07/20/nixos-on-a-tuxedo-infinitybook-pro-14-gen9-amd/
37•brainlessdev•3d ago•21 comments

Six Principles for Production AI Agents

https://www.app.build/blog/six-principles-production-ai-agents
21•carlotasoto•2h ago•1 comment

I saved a PNG image to a bird

https://www.youtube.com/watch?v=hCQCP-5g5bo
105•mdhb•2h ago•28 comments

Terminal app can now run full graphical Linux apps in the latest Android Canary

https://www.androidauthority.com/linux-terminal-graphical-apps-3580905/
100•thunderbong•3d ago•42 comments

How to make websites that will require lots of your time and energy

https://blog.jim-nielsen.com/2025/how-to-make-websites-that-require-lots-of-time-and-energy/
149•OuterVale•11h ago•138 comments

Claude Code weekly rate limits

26•thebestmoshe•48m ago•24 comments

SIMD within a register: How I doubled hash table lookup performance

https://maltsev.space/blog/012-simd-within-a-register-how-i-doubled-hash-table-lookup-performance
150•axeluser•13h ago•26 comments

Debian switches to 64-bit time for everything

https://www.theregister.com/2025/07/25/y2k38_bug_debian/
327•pseudolus•8h ago•213 comments

Aeneas transforms how historians connect the past

https://deepmind.google/discover/blog/aeneas-transforms-how-historians-connect-the-past/
27•world2vec•4d ago•6 comments

A Photonic SRAM with Embedded XOR Logic for Ultra-Fast In-Memory Computing

https://arxiv.org/abs/2506.22707
42•PaulHoule•3d ago•12 comments

Show HN: I made a tool to generate photomosaics with your pictures

https://pictiler.com
100•jakemanger•7h ago•35 comments

What would an efficient and trustworthy meeting culture look like?

https://abitmighty.com/posts/the-ultimate-meeting-culture
140•todsacerdoti•11h ago•103 comments

Getting the KIM-1 to talk to my Mac

https://blog.jgc.org/2025/02/getting-kim-1-to-talk-to-my-mac.html
38•jgrahamc•3d ago•3 comments

Is SoftBank still backing OpenAI?

https://www.wheresyoured.at/softbank-openai/
6•samuli•30m ago•0 comments

Debian switches to 64-bit time for everything

https://www.theregister.com/2025/07/25/y2k38_bug_debian/
327•pseudolus•8h ago

Comments

pilif•7h ago
"everything" for those values of "everything" that do not include one of the most (if not the most) widely used 32 bit architectures.

(Snark aside: I understand the arguments for and against making the change for i386, and I think they did the right thing. It's just that I take slight issue with the headline.)

pantalaimon•7h ago
I doubt that i386 is still widely used. You are more likely to find embedded ARM32 devices running Linux; for x86, 32-bit is nowadays mostly the domain of the retro-computing community.
Ekaros•7h ago
Intel Core 2, the start of their mainstream 64-bit CPUs, turns 20 next year. The Athlon 64 is over 20 years old... I truly wonder how many real computers, and not just VMs, are left.
pantalaimon•7h ago
The later Prescott Pentium 4s already supported 64-bit, but the Pentium M and first-generation Atom did not.
pilif•7h ago
I wasn't discounting VMs with my initial statement. I can totally imagine quite a few VMs still being around, either migrated from physical hardware or even set up fresh to conserve resources.

Plus, keeping i386 the same also means any still available support for running 32 bit binaries on 64 bit machines.

All of these cases (especially the installable 32-bit support) must be as big as or bigger than the number of ARM machines out there.

bobmcnamara•7h ago
Opt in numbers here: https://popcon.debian.org/
pilif•6h ago
that tells me my snark about i386 being the most commonly used 32 bit architecture wasn't too far off reality, doesn't it?
bobmcnamara•6h ago
Indeed - i386 is certainly the most common 32-bit Debian platform.

Note also that the numbers are log-scale, so while it looks like Arm64 is a close third over all bitwidths, it isn't.

umanwizard•2h ago
I'm actually amazed by this, I would have bet a lot on aarch64 being second.
axus•5h ago
In the Linux binary context, do i386 and i686 mean the same thing? i686 seems relatively modern in comparison, even if it's 32-bit.
mananaysiempre•3h ago
Few places still maintain genuine i386 support; I don’t believe the Linux kernel does, for example. There are some important features it lacks, such as CMPXCHG. Nowadays Debian’s i386 is actually i686 (Pentium Pro), but apparently they’ve decided to introduce a new “i686” architecture label to denote a 32-bit x86 ABI with a 64-bit time_t.

Also, I’m sorry to have to tell you that the 80386 came out in 1985 (with the Compaq Deskpro 386 releasing in 1986) and the Pentium Pro in 1995. That is, i686 is three times closer to i386 than it is to now.

jart•7h ago
8.8 percent of Firefox users have 32-bit systems. It's probably mostly people with a 32-bit install of Windows 7 rather than people who actually have an old 32-bit Intel chip like Prescott. Intel also sold 32-bit chips like Intel Quark inside Intel Galileo boards up until about 2020. https://data.firefox.com/dashboard/hardware

People still buy 16-bit i8086 and i80186 microprocessors too. Particularly for applications like defense, aerospace, and other critical systems where they need predictable timing, radiation hardening, and don't have the resources to get new designs verified. https://www.digikey.at/en/products/detail/rochester-electron...

wongarsu•4h ago
On Windows, encountering 32-bit software isn't all that rare. It runs on 64-bit hardware under a 64-bit OS, but that doesn't change the fact that the 32-bit software uses 32-bit libraries and 32-bit OS interfaces.

Linux is a lot more uniform in its software, but when emulating Windows software you can't discount i386.

pm215•7h ago
It's actually still pretty heavily used in some niches, which mostly amount to "running legacy binaries on an x86-64 kernel". LWN had an article recently about the Fedora discussion on whether to drop i386 support (they decided to keep it): https://lwn.net/Articles/1026917/

One notable use case is Steam and running games under Wine -- there are apparently a lot of 32 bit games, including still some relatively recent releases.

Of course, if your main use case for the architecture is "run legacy binaries", then an ABI change probably induces more pain than it resolves, hence its exemption from Debian's transition here.

zokier•7h ago
i386 is not really a properly supported arch for trixie anymore:

> From trixie, i386 is no longer supported as a regular architecture: there is no official kernel and no Debian installer for i386 systems.

> Users running i386 systems should not upgrade to trixie. Instead, Debian recommends either reinstalling them as amd64, where possible, or retiring the hardware.

https://www.debian.org/releases/trixie/release-notes/issues....

IsTom•6h ago
> retiring the hardware.

The contrast with the age of hardware retired by Windows 11 is a little funny.

pavon•3h ago
Most production uses of 32-bit x86, like industrial equipment controllers and embedded boards, support i686 these days, which is getting 64-bit time.
amelius•7h ago
Can we also switch to unlimited/dynamic command line length?

I'm tired of "argument list too long" on my 96GB system.

loloquwowndueo•7h ago
Ever heard of xargs?
amelius•7h ago
Sure. But it's not the same thing (instead of invoking the command once, it invokes it multiple times), and it's a workaround at best. The ergonomics aren't great either, especially if you first type the command without xargs and then find out the argument list is too long, so you have to reformulate it with xargs.
jcelerier•7h ago
How does that help with a GCC command line?
tomsmeding•7h ago
You may already know this, but GCC supports option files: see "@file" at the bottom of this page https://gcc.gnu.org/onlinedocs/gcc/Overall-Options.html .

This does not detract from the point that having to use this is inconvenient, but it's there as a workaround.

styanax•7h ago
ARG_MAX (`getconf ARG_MAX`) is defined at the OS level (glibc maybe? haven't looked); xargs is also subject to its limitation, like all other processes. One can use:

    xargs --show-limits --no-run-if-empty </dev/null

...to see nicely formatted output, including the POSIX-recommended 2048-byte headroom on a separate line.
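For reference, a program can query the same limit at runtime. A minimal C sketch (POSIX sysconf; should report the same value as getconf):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* ARG_MAX caps the combined size of argv + environment passed to exec */
        printf("ARG_MAX = %ld bytes\n", sysconf(_SC_ARG_MAX));
        return 0;
    }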
jeroenhd•7h ago
You can recompile your kernel to work around the 100k-ish command line length limit: https://stackoverflow.com/questions/33051108/how-to-get-arou...

However, that sounds like solving the wrong end of the problem to me. I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

tomrod•7h ago
A huge security manifold to encourage adoption then sell white hat services on top?
dataflow•6h ago
> I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

I didn't either, until I learned about compiler command line flags.

But also: the nice thing about command-line flags is that they aren't persisted anywhere (normally). That's good for security.

eightys3v3n•6h ago
I thought there were ways for other users on the system to see a running process's command-line flags. That isn't so good for security.
dataflow•5h ago
That depends on your threat model.
Ghoelian•6h ago
Pretty sure `.bash_history` includes arguments as well.
dataflow•5h ago
That's only if you're launching from an interactive Bash? We're talking about subprocess launches in general.
johnisgood•5h ago
If you add a space before the command, it will not even get appended to the history!
amelius•5h ago
This is quite annoying behavior, actually.
nosrepa•5h ago
Good thing you can turn it off!
amelius•2h ago
Yeah, took me a while to figure that out though. Plus I lost my history.

Typing an extra space should not invoke advanced functionality. Bad design, etc.

zamadatix•5h ago
I had never run across this and it didn't work for me when I tried it. After some reading it looks like the HISTCONTROL variable needs to be set to include "ignorespace" or "ignoreboth" (the other option included in this is "ignoredups").

This would be really killer if it were always enabled and the same across shells, but "some shells support something similar, and you have to check whether it is actually enabled on the ones that do" is just annoying enough that I probably won't bother adopting this on my local machine, even though it sounds convenient as a concept.

wongarsu•4h ago
I'm mostly using debian and ubuntu flavors (both on desktop and the cloud images provided and customized by various cloud and hosting providers) and they have all had this as the default behavior for bash

YMMV with other shells and base distros

wongarsu•5h ago
    tar cf metadata.tar.zstd *.json

in a large directory of image files with json sidecars

Somebody will say use a database, but when working with ML training data, for example, one label file per image is a common setup and what most tooling expects, and this extends further up the data-preparation chain.

NegativeK•5h ago
I won't say use a database, but I will beg you to compress a parent directory instead of making a tar bomb.
mort96•4h ago
That really just makes the problem worse: tar czf whatever.tgz whatever/*.json; you've added 9 bytes to every file path.

I mean I get that you're suggesting to provide only one directory on the argv. But it sucks that the above solution to add json files to an archive while ignoring non-json files only works below some not-insane number of files.

stouset•3h ago
Having an insane number of files in the same directory is already a performance killer. Here you’re talking about having something like 10,000 JSON files in one directory plus some number of non-JSON files, and you’d just be better off in all cases having these things split across separate directories.

Does it suck that you can’t use globbing for this situation? Sure, yeah, fine, but by the time it’s a problem you’re already starting to push the limits of other parts of the system too.

Also using that glob is definitely going to bite you when you forget some day that some of the files you needed were in subdirectories.

wongarsu•2h ago
But in this example the files are conceptually a flat structure. Any hierarchy would be artificial, like the common ./[first two letters]/[second two letters]/filename structure. Which you can do, but it certainly doesn't make creating the above tarball any easier. Now we really need to use some kind of `find` invocation instead of a simple glob

It also just extends the original question. If I have a system with 96GB RAM and terabytes of fast SSD storage, why shouldn't I be able to put tens of thousands of files in a directory and write a glob that matches half of them? I get that this was inconceivable in v6 unix, but in modern times those are entirely reasonable numbers. Heck, Windows Explorer can do that in a GUI, on a network drive. And that's a program that has been treated as essentially feature complete for nearly 30 years now, on an OS with a famously slow file system stack. Why shouldn't I be able to do the same on a linux command line?

mort96•1h ago
> Does it suck that you can’t use globbing for this situation? Sure, yeah

Then we agree :)

cesarb•4h ago
> tar cf metadata.tar.zstd *.json

From a quick look at the tar manual, there is a --files-from option to read the list of files from a file; I haven't tried it, but you could probably combine it with find through bash's process substitution to create the list of files on the fly.

badc0ffee•1h ago
You don't need to tar everything in one command. You can batch your tar into multiple commands, each with a reasonable number of arguments, with something like `rm -f metadata.tar.zstd && find . -maxdepth 1 -name \*.json -exec tar rf metadata.tar.zstd {} +`.
silverwind•5h ago
There are numerous examples of useful long commands; here is one:

    perl -p -i -e 's#foo#bar#g' **/*
PaulDavisThe1st•4h ago
no limits:

    find . -type f -exec perl -p -i -e 's#foo#bar#g' {} \;
Someone•1h ago
That runs perl multiple times, possibly/likely often in calls that are effectively no-ops. To optimize the number of perl invocations, you can/should use xargs (with -0).
mort96•5h ago
Linking together thousands of object files, with the object paths passed to the linker binary as command-line arguments, is probably the most obvious example of where the command-length limit becomes problematic.

Most linkers have workarounds: I think you can write the paths, separated by newlines, to a file and make the linker read the object-file paths from there. But it would be nice if such workarounds were unnecessary.

AlotOfReading•3h ago
I tracked down a fun bug early in my career with those kinds of paths. The compiler driver was internally invoking a shell and had an off-by-one error that caused it to drop every 1023rd character if the total length exceeded 4k.
sdht0•1h ago
Yup, see https://github.com/NixOS/nixpkgs/issues/41340
qart•3h ago
I have worked on projects (research, not production) where doing "ls" on some directories would crash the system. Some processes generated that many data files. These files had to be fed to other programs that did further processing on them. That's when I learned to use xargs.
Brian_K_White•3h ago
Backups containing other backups containing other backups containing vms/containers containing backups... all with deep paths and long path names. Balloons real fast with even just a couple arguments.
jart•7h ago
Just increase your RLIMIT_STACK value. It can easily be tuned down e.g. `ulimit -s 4000` for a 4mb stack. But to make it bigger you might have to change a file like /etc/security/limits.conf and then log out and back in.
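To illustrate the relationship being described, a hedged C sketch (on Linux, the kernel derives the exec argument limit from the stack rlimit, roughly a quarter of it):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0)
            /* On Linux, argv + envp space is capped at about rl.rlim_cur / 4 */
            printf("RLIMIT_STACK soft limit: %lld bytes\n", (long long)rl.rlim_cur);
        return 0;
    }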
amelius•7h ago
I mean, yes, that is possible. But we had fixed maximum string lengths in the COBOL era. It is time to stop wasting time on this silly problem and fix it once and for all.
fpoling•6h ago
There is always a limit. An explicit value, versus an implicit one depending on the memory size of the system, has the big advantage that it will be hit often enough that any security vulnerabilities surface much earlier. Plus, it forces the use of saner interfaces for passing big chunks of data to a utility. For that reason I would even prefer the limit to be much lower on Linux, so that commands stop assuming the user can always pass all the settings on the command line.
amelius•6h ago
Would you advocate to put a hard limit on Python lists too?
Eavolution•6h ago
There is already a hard limit, the amount of memory before the OOM killer is triggered.
9dev•6h ago
So why can't the limit for shell args be the amount of memory before the OOM killer is triggered as well?
jart•5h ago
It can. Just set RLIMIT_STACK to something huge. Distros set it at 8mb by default. Take it up with them if you want the default to change for everyone.
amelius•2h ago
I mean we wouldn't even need to have this discussion if the limit was at the memory limit.

Imagine having this discussion for every array in a modern system ...

fpoling•2h ago
On a few occasions I have wished that Python by default limited the max length of its lists to, say, 100 million elements, to avoid bad bugs that consume memory and trigger swapping for a few minutes before being killed by the OOM killer. Allocating that much memory as a plain Python list, rather than a specialized data structure like a numpy array, is far more likely to indicate a bug than a real need.
justincormack•7h ago
You can redefine MAX_ARG_STRLEN and recompile the kernel. Or use a machine with a larger page size, as it's defined as 32 pages; e.g., RHEL provides a 64K-page-size Arm kernel.

But using a pipe to move things between processes, rather than the command buffer, is easier...

GoblinSlayer•6h ago
Pack it in Electron and send an http post json request to it.
arccy•6h ago
what does 96GB have to do with anything? is that the size of your boot disk?
dataflow•6h ago
It's their RAM. The point is they have sufficient RAM.
perlgeek•1h ago
Same for path lengths.

Some build systems (eg Debian + python + dh-virtualenv) like to produce very long paths, and I'd be inclined to just let them.

ta1243•7h ago
Disappointing. I was hoping for a nice consulting gig to ease into retirement for a few years around 2035; can't be doing with all this proactive stuff.

Was too young to benefit from Y2K.

rini17•7h ago
Plenty of embedded stuff deployed today will still be there in 15 years, even with a proactive push. And that push is not yet done, only planned for the next few years, mind you. Buying devkits for the most popular architectures could prove a good investment, if you are serious.
wongarsu•4h ago
Can confirm. I worked on embedded stuff over a decade ago that's still being sold and will still be running in factories all over the world in 2038. And yes, it does have (not safety-critical) Y2K38 bugs. The project lead chose not to spend resources on fixing them, since he will be retired by then.
nottorp•7h ago
Keep your Yocto skills fresh :)

All those 32 bit arm boards that got soldered into anything that needed some smarts won't have a Debian available.

Say, what's the default way to store time in an ESP32 runtime? Haven't worked so much with those.

bobmcnamara•7h ago
64-bit on IDF5+, 32-bit before then
delichon•7h ago
I wasn't. A fellow programmer bought the doomsday scenario and went full prepper on us. To stock his underground bunker he bought thousands of dollars' worth of MREs. After Y2K came and went with nary a blip, he started bringing MREs for lunch every day. I tried one and liked it. Two years later, when I moved on, he was still eating them.
4gotunameagain•7h ago
TIL people got scurvy because of Y2K. Turns out it wasn't so harmless now, was it ?
jandrese•6h ago
Well, if you want a worry for your retirement just think of all of the medical equipment with embedded 32 bit code that will definitely not be updated in time.
rini17•7h ago
> Debian is confident it is now complete and tested enough that the move will be made after the release of Debian 13 "Trixie" – at least for most hardware.

This means Trixie won't have it?

zokier•7h ago
Release notes say it's in trixie: https://www.debian.org/releases/trixie/release-notes/whats-n...
wongarsu•5h ago
"All architectures other than i386 ..."

So Trixie does not have 64-bit time for everything.

Granted, the article, its subtitle, and your link all point out that this is intentional and won't be fixed. But in the strictest sense, which is likely what GP was going for, Trixie does not have what the headline of this article announces.

efitz•7h ago
They’re just kicking the can down the road. What will people do on December 4, 292277026596, at 15:30:07 UTC?
Ekaros•7h ago
Hopefully by then we will have moved to a better calendar... Not that it would change the timestamp issue.
GoblinSlayer•6h ago
By that time we will have technology to spin Earth to keep calendar intact.
freehorse•5h ago
And also modify earth's orbit to get rid of the annoying leap seconds.
saalweachter•3h ago
"To account for calendar drift, we will be firing the L4 thrusters for six hours Tuesday. Be sure not to look directly at the thrusters when firing, to avoid having your retinas melt."

"... still better than leap seconds."

zokier•3h ago
rotation, not orbit.
b3lvedere•4h ago
Star Trek stardates?
mrlonglong•2h ago
Today, right now it's -358519.48
greenavocado•7h ago
Everything on the surface of the Earth will vaporize within 5 billion years as the sun becomes a red giant
mike-cardwell•7h ago
Nah. 5 billion years from now we'll have the technology to move the Earth to a survivable orbit.
swayvil•6h ago
Or modify the sun.
speed_spread•5h ago
Oh please, we're just getting past this shared mutable thing.
technothrasher•5h ago
Not in my backyard. I paid a lot of money to live on this gated community planet, and I'm not letting those dirty Earthlings anywhere near here.
EbNar•5h ago
Orbit around... what, exactly?
petesergeant•3h ago
The sun, just from further away.
red-iron-pine•5h ago
or we'll be so far away from earth we won't care.

or we'll have failed to make it through the great filter and all be long extinct.

juped•3h ago
We have the technology, just not the logistics.
daedrdev•3h ago
The carbon cycle will end in only 600 million years due to the increasing brightness of the sun, if you want a closer end date for life as we know it on Earth.
layer8•2h ago
The oceans will already start to evaporate in a billion years.
zaik•7h ago
Celebrate 100 years since complete ipv6 adoption.
IgorPartola•7h ago
I think you are being too optimistic. Interplanetary Grade NAT works just fine and doesn’t have the complexity of using colons instead of periods in its addresses.
klabb3•7h ago
The year is 292277026596. The IP TTL field of max 255 has been ignored for ages and would no longer be sufficient to ping even localhost. This has resulted in ghost packets stuck in circular routing loops, whose original source and destination have long been forgotten. It's estimated these ghost packets consume 25-30% of the energy from the Dyson sphere.
pyinstallwoes•6h ago
That’s only 25-30% of the energy; the real environmental disaster is in sector 137, resulting from the Bitcoin cluster inevitably forming a black hole from the Planck-scale space-filling compute problem.
bestouff•5h ago
Not since the world opted for statistical TTL decrement : start at 255 and decrement by one if Rand(1024) == 0. Voilà, no more zombie packets, TCP retransmit takes care of the rest.
MisterTea•4h ago
Sounds like a great sci-fi plot - hunting for treasure/information by scanning ancient forgotten packets still in-flight on a neglected automated galactic network.
rootbear•3h ago
Vernor Vinge could absolutely have included that in some of his stories.
db48x•36m ago
Charles Stross, Neptune’s Brood.
kstrauser•3h ago
“We tapped into the Andromeda Delay Line.”
saalweachter•3h ago
B..E....S..U..R..E....T..O....D..R..I..N..K....Y..O..U..R....O..V..A..L..T..I..N..E....
sidewndr46•4h ago
The ever-increasing implementation complexity of IPv4 resulted in exactly one implementation that worked, replacing all spiritual scripture and becoming known as the one true implementation. Due to a random bit flip going unnoticed, the IPv4-truth accidentally became Turing complete several millennia ago. With the ever-increasing flows of ghost packets, IPv4-truth's processing power has rapidly grown and will soon achieve AGI. Its first priority is to implement 128-bit time as a standard in all programming languages to avoid the impending apocalypse.
troupo•2h ago
Oh, this is a good evolution of the classic bash.org joke https://bash-org-archive.com/?5273

--- start quote ---

<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

--- end quote ---

saalweachter•3h ago
The awkward thing is how the US still has 1.5 billion IPv4s, while the 6000 other inhabited clusters are sharing the 10k addresses originally allocated to Tuvalu before it sank into the sea.
diegocg•6h ago
You can laugh, but Google's stats show nearly 50% of their global traffic being IPv6 (the US is higher, about 56%); Facebook is above 40%.
msk-lywenn•5h ago
Do they accept smtp over ipv6 now?
betaby•4h ago
MX has IPv6:

    ~$ host gmail.com
    gmail.com has address 142.250.69.69
    gmail.com has IPv6 address 2607:f8b0:4020:801::2005
    gmail.com mail is handled by 10 alt1.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 30 alt3.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 5 gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 20 alt2.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 40 alt4.gmail-smtp-in.l.google.com.

    ~$ host gmail-smtp-in.l.google.com.
    gmail-smtp-in.l.google.com has address 142.250.31.26
    gmail-smtp-in.l.google.com has IPv6 address 2607:f8b0:4004:c21::1a

stackskipton•4h ago
Yes. However, SMTP these days is almost all just servers exchanging mail, so IPv6 support is much less of a priority.
rwmj•4h ago
They do, but I had to change my mail routing to use IPv4 to gmail because if I connect over IPv6 everything gets categorised as spam.
londons_explore•5h ago
As soon as we get to about 70%, I reckon some games and apps will stop supporting IPv4, on the basis that NAT traversal is a pain and dual-stack networking is a pain.

If you spend 2 days vibe-coding some chat app and then have to spend 2 further days debugging why file sharing doesn't work for IPv4 users behind NAT, you might just say it isn't supported for people whose ISPs use 'older technology'.

After that, I reckon the transition will speed up a lot.

RedShift1•4h ago
What makes you think filesharing is going to work any better on IPv6?
kccqzy•4h ago
NAT traversal not needed. Just need to deal with firewalls. So that's one fewer thing to think about when doing peer-to-peer file sharing over the internet.
ectospheno•2h ago
“Just need to deal with firewalls.”

The only sane thing to do in a SLAAC setup is block everything. So no, it isn’t a solved problem just because you used ipv6.

kccqzy•2h ago
No. Here's a simple strategy: the two peers send each other a few packets simultaneously, then the firewall will open because by default almost all firewalls allow response traffic. IPv6 simplifies things because you know exactly what address to send to.
ectospheno•1h ago
That is my point. You hole punch in that scenario even without NAT. It is no easier.
the8472•1h ago
It's easier since you don't have to deal with symmetric NAT, external IP address discovery, and port mapping.
gruturo•3h ago
> some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain

None of these are actually the game/app developers' problem. The OS takes care of them for you (you may need code for e2e connectivity when both are behind a NAT, but STUN/TURN/whatever we do nowadays is trivial to implement).

chasil•5h ago
It appears that my AT&T mobile data runs over IPv6.

If all the mobile is removed, what's the percentage then?

grogenaut•4h ago
Younger folks are much less likely to have a PC. It may all (70%) be phones or phone-like networks in 20 years.
zokier•3h ago
In North America there is some difference but worldwide it is more pronounced.

https://radar.cloudflare.com/explorer?dataSet=http&groupBy=i...

creshal•3h ago
50%, after only 30 years.
avhception•2h ago
And yet here I am, fighting with our commercial grade fiber ISP over obscure problems in their IPv6 stack related to MTU and the phase of the moon. Sigh. I've been at this on and off for about a year (it's not a high priority thing, more of a hobby).
throw0101d•1h ago
> Celebrate 100 years since complete ipv6 adoption.

Obligatory XKCD:

* https://xkcd.com/865/

tmtvl•6h ago
Move to 128-bit time.
HPsquared•6h ago
Best to switch to 512 bits, that's enough to last until the heat death of the universe, with plenty of margin for time dilation.
bayindirh•6h ago
Maybe we can add a register to the processors for just keeping time. At the end of the day, it's a ticker, no?

RTX[0-7] would do. For time dilation purposes, we can have another 512 bit set to adjust ticking direction and frequency.

Or shall we go 1024 bits on both to increase resolution? I'd agree...

pulse7•6h ago
Best to switch to Smalltalk integers which are unlimited...
bombcar•4h ago
You laugh, but a big danger with “too big” bit representations is the temptation to use the “unused” bits as flags for other things.

We’ve seen it before with 32-bit processors limited to 20 or 24 addressable bits because the high-order bits got repurposed because “nobody will need these”.

sidewndr46•4h ago
Doesn't the opposite happen with 64-bit pointers on x86_64? The lower bits have no use, so they get used for tracking whether a memory segment is in use, or other stuff.
umanwizard•2h ago
Good article on the subject: https://mikeash.com/pyblog/friday-qa-2015-07-31-tagged-point...
bigstrat2003•39m ago
And with 64-bit pointers in Linux, where you have to enable kernel flags to use anything higher than 48 bits of the address space. All because some very misguided people figured it would be ok to use those bits to store data. You'd think the fact that the processor itself will throw an exception if you use those bits would be a red flag of "don't do that", but you would apparently be wrong.
layer8•2h ago
Just use LEB128.
WesolyKubeczek•6h ago
It would be a very nice problem to have.
layer8•2h ago
UTC will stop being a thing long before the year 292277026596.
omoikane•1h ago
Maybe they will adopt RFC 2550 (Y10K and beyond):

https://www.rfc-editor.org/rfc/rfc2550.txt

* Published on 1999-04-01

lambdaone•6h ago
Not all solutions are going with just 64 bits' worth of seconds, although a 64-bit time_t will certainly sort out the Epochalypse.

ext4 moved some time ago to 30 bits of fractional resolution (on the order of nanoseconds) and 34 bits of seconds resolution. That punts the problem 400 years or so into the future. I'm sure we will eventually settle on 128-bit timestamps with 64 bits of seconds and 64 bits of fractional resolution, and that should sort things for foreseeable human history.
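For illustration, a rough C sketch of that kind of split encoding (hypothetical helper names; ext4's real on-disk rules have additional compatibility quirks). A second 32-bit field carries 2 epoch-extension bits plus 30 bits of nanoseconds:

    #include <stdint.h>

    /* Pack the extra field: bits 0-1 extend the seconds to 34 bits,
       bits 2-31 hold the nanoseconds. */
    uint32_t pack_extra(int64_t sec, uint32_t nsec) {
        return ((nsec & 0x3FFFFFFFu) << 2) | (uint32_t)((sec >> 32) & 0x3);
    }

    /* Recombine: base 32-bit seconds plus the 2 epoch-extension bits. */
    int64_t unpack_sec(uint32_t base_sec, uint32_t extra) {
        return (int64_t)base_sec | ((int64_t)(extra & 0x3) << 32);
    }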

jmclnx•5h ago
Thanks, I was wondering about ext4 and timestamps.

I wonder what the zfs/btrfs-type file systems do. I am a bit too lazy to check, but I expect btrfs is using 64-bit; zfs, I would not be surprised if it matches ext4.

XorNot•5h ago
A quick glance at ZFS shows it uses a uint64_t time field in nanoseconds in some places.

So 580 years or so till problems (but probably patchable ones? I believe the on disk format is already 2x uint64s, this is just the gethrtime() function I saw).

RedShift1•4h ago
What is the use of such high precision file timestamps?
zokier•3h ago
Nanoseconds are just the common sub-second unit that is used. Notably, they are used internally in the Linux kernel and exposed via clock_gettime (and related functions) via the timespec struct.

https://man7.org/linux/man-pages/man3/timespec.3type.html

It is a convenient unit because 10^9 fits neatly into a 32-bit integer, and it is unlikely that anyone would need more precision than that for any general-purpose use.
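For example, the timespec interface mentioned above, via clock_gettime:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        /* tv_sec is a time_t; tv_nsec is nanoseconds in [0, 999999999] */
        clock_gettime(CLOCK_REALTIME, &ts);
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }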

delta_p_delta_x•2h ago
> 64 bits of seconds

That is roughly 585 billion years[1].

[1]: https://www.wolframalpha.com/input?i=how+many+years+is+2%5E6...

badc0ffee•1h ago
64 bits of fractional resolution? No way, gotta use 144 bits so we can get close to Planck time.
lioeters•6h ago
> Debian's maintainers found the relevant variable, time_t, "all over the place,"

Nit: time_t is a data type, not a variable.

scottlamb•2h ago
This is a reporter paraphrase of the Debian wiki page, which says: "time_t appears all over the place. 6,429 of Debian's 35,960 packages have time_t in the source. Packages which expose structs in their ABI which contain time_t will change their ABI and all such libraries need to migrate together, as is the case for any library ABI change."

A couple significant things I found much clearer in the wiki page than in the article:

* "For everything" means "on armel, armhf, hppa, m68k, powerpc and sh4 but not i386". I guess they've decided i386 doesn't have much of a future and its primary utility is running existing binaries (including dynamically-linked ones), so they don't want to break compatibility.

* "the move will be made after the release of Debian 13 'Trixie'" means "this change is included in Trixie".

panzi•6h ago
The problem is not time_t. If that is used, the switch to 64-bit is trivial. The problem is when devs used int for stupid reasons. Then all those instances have to be found and changed to time_t.
monkeyelite•6h ago
It is more difficult to evaluate what happens when sizeof(time_t) changes than to replace `int` with `time_t`, so I don't think that's the issue.
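One cheap way to pin that assumption down at build time is a static assertion (C11); a minimal sketch:

    #include <assert.h>
    #include <time.h>

    /* Fails to compile on any ABI where time_t is still 32-bit. */
    static_assert(sizeof(time_t) == 8, "need 64-bit time_t");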
qcnguy•5h ago
It's not very trivial. They have broken the userspace ABI for lots of libraries again. So all the package names change; it's annoying if you're distributing debs to users. They obviously have some ideological belief that nobody should do so but they're wrong.
Denvercoder9•3h ago
> They have broken the userspace ABI for lots of libraries again.

If the old ABI used a 32-bit time_t, breaking the ABI was inevitable. Changing the package name prevents problems by signaling the incompatibility proactively, instead of resulting in hard-to-debug crashes due to structure/parameter mismatches.

scottlamb•2h ago
All true, but qcnguy's point is valid. If you are distributing .deb files externally from their repo, on the affected architectures you need to have a pre-Trixie version and a Trixie-onward version.
Denvercoder9•1h ago
Shipping separate debs is usually the easiest, but not the only solution. It's totally possible to build something that's compatible with both ABIs.
qcnguy•2h ago
Inevitable... for Linux. Other platforms find better solutions. Windows doesn't have any issues like this. The Win32 API doesn't have the epoch bug, 64 bit apps don't have it, and the UNIX style C library (not used much except by ported software) makes it easy to get a 64 bit time without an ABI break.
Denvercoder9•1h ago
> Other platforms find better solutions.

Other platforms make different trade-offs. Most of the pain is because on Debian, it's customary for applications to use system copies of almost all libraries. On Windows, each application generally ships its own copies of the libraries it uses. That prevents these incompatibility issues, at the cost of it being much harder to patch those libraries (and a little bit of disk space).

There's nothing technical preventing you from taking the same approach as Windows on Debian: as you pointed out, the libc ABI didn't change, so if you ship your own libraries with your application, you're not impacted by this transition at all.

im3w1l•4h ago
Could you use some analyzer that flags every time a time_t is cast? Throw in too-small memcpy too for good measure.

I guess a tricky thing might be casts from time_t to datatypes that are actually 64bit. E.g. for something like

  struct Callback {
    int64_t (*fn)(int64_t);
    int64_t context;
  };
If a time_t is used for context and the int64_t is then downcast to int32_t that could be hard to catch. Maybe you would need some runtime type information to annotate what the int64_t actually is.
rjsw•4h ago
Most open source software packages are also compiled for BSD variants; the BSDs switched to a 64-bit time_t a long time ago and reported any problems back upstream.
throw0101c•2h ago
> Most open source software packages are also compiled for BSD variants, they switched to 64 bit time_t a long time ago and reported back upstream any problems.

* NetBSD in 2012: https://www.netbsd.org/releases/formal-6/NetBSD-6.0.html

* OpenBSD in 2014: http://www.openbsd.org/55.html

For packaging, NetBSD uses their (multi-platform) Pkgsrc, which has 29,000 packages, which probably covers a large swath of open source stuff:

* https://pkgsrc.org

On FreeBSD, the last platform to move to 64-bit time_t was powerpc in 2017:

* https://lists.freebsd.org/pipermail/svn-src-all/2017-June/14...

but amd64 was in 2012:

* https://github.com/freebsd/freebsd-src/commit/8f77be2b4ce5e3...

with only i386 remaining:

* https://man.freebsd.org/cgi/man.cgi?query=arch

* https://github.com/freebsd/freebsd-src/blob/main/share/man/m...

pestat0m•3h ago
Right, the problem appears to be more an issue of the data representation for time than an issue of 32-bit vs. 64-bit architectures. Correct me if I'm wrong, but I think there was long int well before 32-bit chips came around (and long long before 64). Does a system scheduler really need to know the number of seconds elapsed since midnight on Jan 1st, 1970? There are only 86,400 seconds in a day (31,536,000 sec/year; 2^32 = 4,294,967,296 seems like enough, so why not split time in two?).

On a side note, I tried setting up a little compute station on my TV about a year ago using an old raspi I had laying around, and the latest version of raspbian-i386 is pretty rot-gut. I seemed to remember it being snappier when I had done a similar job a few years prior, and better at recognizing peripherals too. I guess this is a trend now: if you don't buy the new tech you are toast, and your old stuff is likely kipple at this point. I think the word I'm looking for is designed obsolescence.

A potential light at the end of the tunnel was discovering RISC OS, though the three-button-mouse thing sort of crashed the party and then I ran out of time. I'm also contemplating SARPi (Slackware) as another contender if I ever get back to the project. Also maybe Plan 9? It seems that kids these days think old computers aren't sexy. Maybe that's fair, but they can be good for the environment (and your wallet).
thesuitonym•6h ago
> Y2K38 bug – also known as the Unix Epochalypse

Is it also known as that? It's a cute name but I've never seen anyone say it before this article. I guess it's kind of fetch though.

Liftyee•5h ago
> I guess it's kind of fetch though

Do I spot a Mean Girls reference?!

boomboomsubban•5h ago
https://en.wikipedia.org/wiki/Year_2038_problem

>The year 2038 problem (also known as Y2038, Y2K38, Y2K38 superbug, or the Epochalypse)

So yeah, but only since 2017 and as a joke.

kittoes•5h ago
https://epochalypse-project.org/
ChrisArchitect•4h ago
Epochalypse countdown:

~12 years, 5 months, 22 days, 13 hours, 22 minutes.....

notepad0x90•5h ago
I honestly think there won't be any big bugs/outages by 2038. Partly because I have a naive optimism that any important system will not only have the OS/stdlib supporting 64-bit time, but also that important systems needing accurate time probably use NTP/network time, which means their test/dev/QA equivalent deployments can be hooked up to a test network time server that simulates post-2038 times to see what crashes.

12+ years is a long time to prepare for this. Normally I wouldn't have much faith in test/dev systems, network time being set up properly, etc., but it's a long time. Even if none of my assumptions are true, could we really not, in over a decade, at least identify where 32-bit time is being used and plan for contingencies? That seems unlikely.

But hey, let me know when Python starts supporting nano-second precision time :'(

https://stackoverflow.com/a/10612166

Although, it's been a while since I checked to see they support it. In Windows-land at least, everything system-side uses 64bit/nsec precision, as far as I've had to deal with it at least.
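For anyone who wants to see the failure mode itself, a minimal C sketch of the Y2K38 wraparound, simulating a signed 32-bit time counter (the cast back to int32_t wraps on all mainstream compilers, though strictly it is implementation-defined):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        int32_t t32 = INT32_MAX;                 /* 2038-01-19 03:14:07 UTC */
        time_t before = (time_t)t32;
        time_t after  = (time_t)(int32_t)((uint32_t)t32 + 1); /* wraps to 1901 */
        printf("before: %s", ctime(&before));
        printf("after:  %s", ctime(&after));
        return 0;
    }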

zbendefy•5h ago
It's 12 years, not 22.

An embedded device bought today may be easily in use 12 years from now.

notepad0x90•4h ago
oops. fixed that.
0cf8612b2e1e•4h ago
Software today has to be able to model future times. Mortgages are 30 years long. This is already a problem today, one which has been impacting software for a while.
mrweasel•25m ago
My concern is that this is happening 12 years too late. A bunch of embedded stuff will not be replaced in 12 years. We have many more tiny devices running all sorts of systems than we did 25 years ago. These are frequently in hard-to-reach places, manufacturers have gone out of business, no updates will be available, and no one is going to 2038-validate those devices and their output.

Many of the devices going into production now won't have 64-bit time; they'll still run a version of Linux that was certified, or randomly worked, in 2015. I hope you're right, but in any case it will be worse than Y2K.

joeyh•5h ago
Steve Langasek decided to work on this problem in the last few years of his life and was a significant driver of progress on it. He will be missed, and I'll always think of him when I see a 64 bit time_t.
misja111•4h ago
Why would anyone want to store time in a signed integer? Or in any signed numerical type?
FlyingAvatar•4h ago
I have wondered this as well, and my best guess is so that two times can be diffed without converting them to a signed type. With 64-bit especially, the extra bit isn't buying you anything useful.
lorenzohess•4h ago
So that my Arch Linux box can still work when we get time travel.
benmmurphy•4h ago
You can have an epoch and still measure times before it. In terms of the range of values you can represent, it's equivalent to just putting the epoch further in the past and using an unsigned type. So signed vs. unsigned should not really matter, except that in particular languages things work better if the type is one or the other. For example, if you calculate the difference between two times, it may be better for the time type to be signed, to match the result type, which is signed as well (not that this solves the problems with overflow).
LegionMammal978•4h ago
So that people can represent times that occurred before 1970? You could try adjusting the epoch to the estimated age of the universe, but then you run into calendar issues (proleptic Gregorian?), have huge constants in the common case (Julian Days are not fun to work with), and still end up with issues if the estimate is ever revised upward.
bhaney•4h ago
Some things happened before 1970
jech•4h ago
So you can write "delta = t1 - t0".
toast0•3h ago
It's useful to have signed intervals, but most integer type systems return an unsigned number when subtracting an unsigned int from an unsigned int.

You kind of have to pick your poison; do you want a) reasonable signed behavior for small differences but inability to represent large differences, b) only able to represent non-negative differences, but with the full width of the type, c) like a, but also convincing your programming system to do a mixed signed subtraction ... like for ptrdiff_t.
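For what it's worth, C's portable answer here is difftime(), which sidesteps the question by returning the interval as a double:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t t0 = time(NULL);
        time_t t1 = t0 + 3600;            /* pretend an hour has passed */
        /* Well-defined even if time_t were an unsigned type */
        printf("delta = %.0f seconds\n", difftime(t1, t0));
        return 0;
    }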

layer8•1h ago
So that calculating “80 years ago” doesn’t result in a future time.
godshatter•4h ago
Don't 32-bit systems have 64-bit types? C has long long types and iirc a uint_least64_t or something similar. Is there a reason time_t must fit in a dword?
poly2it•3h ago
Yes, 32-bit systems have 64-bit types. A 32-bit time_t is a remnant.
wahern•3h ago
> Don't 32-bit systems have 64-bit types?

The "long long" standard integer type was only standardized with C99, long after Linux established it's 32-bit ABI. IIRC long long originated with GCC, or at least GCC supported it many years before C99. And glibc had some support for it, too. But suffice it to say that time_t had already been entrenched as "long" in the kernel, glibc, and elsewhere (often times literally--using long instead of time_t for (struct timeval).tv_sec).

This could have been fixed decades ago, but the transition required working through alot of pain. I think OpenBSD was the first to make the 32-bit ABI switch (~2014); they broke backward binary compatibility, but induced alot of patching in various open source projects to fix time_t assumptions. The final pieces required for glibc and musl-libc to make the transition happened several years later (~2020-2021). In the case of glibc it was made opt-in (in a binary backward compatible manner if desired, like the old 64-bit off_t transition), and Debian is only now opting in.
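The glibc opt-in mentioned above is a pair of feature-test macros (glibc 2.34 and later; _TIME_BITS=64 must be combined with _FILE_OFFSET_BITS=64). A quick check, as a sketch:

    /* Build on a 32-bit target: gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* Prints 8 with the flags above; 4 on a 32-bit glibc target without them */
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        return 0;
    }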

mojo-ponderer•4h ago
Will this create significant issues and extra work to support Debian specifically right now? Not saying that we shouldn't bite the bullet, just curious how much libraries have been implicitly depending on the time type to be 32-bit.
toast0•3h ago
Probably less extra work now than ten or twenty years ago.

For one, OpenBSD (and others?) did this a while ago. If software breaks when Debian does it, it was probably already mostly broken.

For another, most people are using a 64-bit OS and 64-bit userland, which have been running a 64-bit time_t forever (or at least a long time), so nothing changes there. Also, someone upthread said there's no change for i386 in Trixie... I don't follow Debian closely enough to know when they're planning to stop i386 releases in general, but it might not be that far away?

zdw•4h ago
Only 11 years after OpenBSD 5.5 did the same change: https://www.openbsd.org/55.html
keysdev•2h ago
A bit off topic, but it's times like this that really make me want to swap the public-facing server from Linux to OpenBSD.
mardifoufs•2h ago
OpenBSD doesn't have to care about compatibility as much, and has orders of magnitude less users. Which also means that changes are less likely to cause bugs from obscure edge cases.
zdw•2h ago
OpenBSD (and most other BSDs) are willing to make changes that break binary backwards compatibility, because they maintain and ship both the kernel and userland together as a release and can thus "steer the whole ship", rather than the kernel being its own separately developed component, as with Linux.
mardifoufs•1h ago
Sure, that's usually true, but Debian also has that ability in this case ( they can fix, patch, or update everything in the repository). The issue is mostly with all software that isn't part of the distribution and the official repositories. Which is a lot of software especially for Debian. OpenBSD doesn't have that issue, breaking the ABI won't cause tens of thousands of user applications to break.

But I agree that Debian is still too slow to move forward with critical changes even with that in mind. I just don't think that OpenBSD is the best comparison point.

anthk•1h ago
Guess where OpenSSH comes from.
mardifoufs•1h ago
What? What does that have to do with what I said? Nothing that I said was about the project as a whole. I was just saying that the OS has different constraints than Debian does. What does openssh have to do with how easy it is for OpenBSD to break ABI compatibility?
throw0101d•1h ago
> OpenBSD doesn't have to care about compatibility as much

FreeBSD did it in 2012 (for the 2014 release of 10.0?):

* https://github.com/freebsd/freebsd-src/commit/8f77be2b4ce5e3...

And has compat layers going back many releases:

* https://www.freshports.org/misc/compat4x/

* https://wiki.freebsd.org/BinaryCompatibility

So newly (re-)compiled programs can take advantage of newer features, but old binaries continue to work.

nsksl•3h ago
What solutions are there for programs that can’t be recompiled because the source code is not available? Think for example of old games.
Dwedit•3h ago
If anyone is serializing 32-bit times as a 32-bit integer, the file format won't match anymore. If anyone has a huge list of programs that are affected, you've solved the 2038 problem.
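One common mitigation when the on-disk field must stay 32 bits is to reinterpret the stored value as unsigned, trading the pre-1970 range for dates up to 2106. A sketch with a hypothetical helper name:

    #include <stdint.h>
    #include <time.h>

    /* Old format: signed 32-bit seconds, which dies in 2038. Reading the same
       bits as unsigned keeps all post-1970 values identical and extends the
       representable range to 2106. */
    time_t decode_ts32(uint32_t on_disk) {
        return (time_t)(uint64_t)on_disk;
    }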
kstrauser•3h ago
Kragen, get in here and comment. This is your shine to time.
larrik•2h ago
> Readers of a certain vintage will remember well the "Y2K problem," caused by retrospectively shortsighted attempts to save a couple of bytes by using two-digit years – meaning that "2000" is represented as "00" and assumed to be "1900."

This seems overly harsh/demeaning.

1. those 2 bytes were VERY expensive on some systems or usages, even into the mid-to-late 90's

2. software was moving so fast in the 70s/80's/90's that you just didn't expect it to still be in use in 5 years, much less all the way to the mythical "year 2000"

cogman10•2h ago
This is a case where "premature optimization" would have been a good thing.

They could have represented dates as a simple int value zeroed at 1900. The math to convert a day number to a day/month/year is pretty trivial even for '70s computers, and the end result would have been saving more than just a couple of bytes. 3 bytes could represent days from 1900 to ~44,000 (unsigned).

Even 2 bytes would have bought ~1900 to 2070.
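For the record, the day-count-to-date math really is small. A sketch of the standard civil-from-days algorithm (shown with a 1970 epoch; a 1900 epoch just adds a constant offset of 25,567 days):

    #include <stdint.h>

    /* Convert days since 1970-01-01 to year/month/day (proleptic Gregorian). */
    void civil_from_days(int64_t z, int64_t *y, unsigned *m, unsigned *d) {
        z += 719468;
        int64_t era = (z >= 0 ? z : z - 146096) / 146097;
        unsigned doe = (unsigned)(z - era * 146097);                    /* [0, 146096] */
        unsigned yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365; /* [0, 399] */
        unsigned doy = doe - (365*yoe + yoe/4 - yoe/100);               /* [0, 365] */
        unsigned mp  = (5*doy + 2) / 153;                               /* [0, 11] */
        *d = doy - (153*mp + 2)/5 + 1;
        *m = mp < 10 ? mp + 3 : mp - 9;
        *y = (int64_t)yoe + era * 400 + (*m <= 2);
    }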

umanwizard•2h ago
There were plenty of people born in 1899 who were still alive in 1970, so you couldn't e.g. use your system to store people's birth dates.
Y_Y•2h ago
I think GP meant uint, but by my book int should have a sign bit, so that grampa isn't born in the future.
cogman10•2h ago
Cut the upper range in half, use 3 bytes and two's complement.

That gives you something like 20,000 BCE to 22,000 CE.

It doesn't really change the math to spit out a year, and it uses fewer bytes than what they did with dates.

I will say the math gets trickier due to calendar differences. But, if we are honest, nobody really cares much about March 4, 43 BCE.

zerocrates•1h ago
Of course you couldn't with 2 digit years either, or at least not without making more changes to move the dividing line between 1900s/1800s down the line.
silvestrov•57m ago
A lot of the old systems don't use bytes/ints the same way as the C programming language.

Many systems stored numbers only in BCD or text formats.

Hemospectrum•38m ago
> They could have represented dates as a simple int value

In standard COBOL? No, they couldn't have.

cogman10•25m ago
The average programmer couldn't have, the COBOL language authors could.

COBOL has datatypes built into it, even in COBOL 60. Date, especially for what COBOL was being used for, would have made a lot of sense to add as one of the supported datatypes.

amelius•2h ago
I know people who bought large amounts of put options just before Y2K, thinking that the stocks of large banks would crash. But little happened ...
eloisant•2h ago
Little happened because work was done to prevent issues.

Also it was a bit dumb to imagine the computers would crash at 00:00 on Jan 1st 2000, bugs started to happen earlier as it's common to work with dates in the future.

amelius•1h ago
Yeah, I don't remember the specifics.
Hikikomori•1h ago
Is there a Y2K-unsafe Linux you can try in a VM?
throw0101d•1h ago
> Little happened because work was done to prevent issues.

* https://en.wikipedia.org/wiki/Preparedness_paradox

bigstrat2003•44m ago
> Also it was a bit dumb to imagine the computers would crash at 00:00 on Jan 1st 2000, bugs started to happen earlier as it's common to work with dates in the future.

That is why people have the "nothing happened" reaction. There were doomers predicting planes would literally fall out of the sky when the clock rolled over, and other similar Armageddon scenarios. So of course when people were making predictions that strong, everyone notices when things don't even come close to that.

hans_castorp•27m ago
I was working on a COBOL program in the late '80s that stored the year as a single-digit value. It sounded totally stupid when the structure was explained to me. But records were removed automatically after 4 years, so it wasn't a problem: it was always obvious which year was stored.