frontpage.


"The Fantastic Four" Is the Great American Novel

https://www.tedagame.com/zak-site/Great-American-Novel/Great-American-Novel.html
1•blacksqr•2m ago•0 comments

We Raised $21M to Give Fortune 100 Cloud for AI Agents

https://e2b.dev/blog/series-a
1•conroy•3m ago•0 comments

From XML to JSON to CBOR

https://cborbook.com/introduction/from_xml_to_json_to_cbor.html
1•todsacerdoti•3m ago•0 comments

AI Is Eating the Internet

https://fika.bar/paoramen/ai-is-eating-the-internet-01K10JG1SHGZQHN61HPGWPXN60
2•masylum•5m ago•1 comments

The Quintessential Urban Design of 'Sesame Street'

https://www.nytimes.com/2025/07/28/realestate/sesame-street-design-over-the-years.html
1•jaredwiener•6m ago•0 comments

Why I'm Leaving Design–and What That Says about Its Future

https://www.suffsyed.com/futurememo/why-im-leaving-design
1•suffsyed•9m ago•1 comments

$350M lawsuit: Meta allegedly used torrents of 2,600 adult films for AI

https://torrentfreak.com/copyright-lawsuit-accuses-meta-of-pirating-adult-films-for-ai-training/
2•BinaryWhiz•9m ago•1 comments

Show HN: Appcircle CodePush – Secure Alternative to App Center CodePush

https://appcircle.io/codepush
2•orangepush•10m ago•0 comments

NASA faces brain drain as thousands exit under voluntary resignation scheme

https://www.theregister.com/2025/07/28/nasa_voluntary_exits/
1•rntn•10m ago•1 comments

Libraries Pay More for E-Books. Some States Want to Change That

https://www.nytimes.com/2025/07/16/books/libraries-e-books-licensing.html
1•bookofjoe•10m ago•1 comments

Testing the GCC-based Rust compiler (back end)

https://fractalfir.github.io/generated_html/cg_gcc_bootstrap_2.html
1•todsacerdoti•12m ago•0 comments

Reflections on SoCraTes 2025

https://disintegrated.parts/notes/conferences/socrates/2025.html
1•gpi•12m ago•0 comments

Large-scale processing of within-bone nutrients by Neanderthals 125k years ago

https://www.science.org/doi/10.1126/sciadv.adv1257
1•pulisse•12m ago•0 comments

AI Takeover Might Happen in 2 Years – LessWrong

https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-takeover-might-happen-in-2-years
1•9woc•12m ago•0 comments

Show HN: I built a tool that turns OpenAPI specs into an agent

https://www.pitch31.ai/
2•red93•12m ago•0 comments

We built fast UPDATEs for ClickHouse – Part 2: SQL-style UPDATEs

https://clickhouse.com/blog/updates-in-clickhouse-2-sql-style-updates
2•sdairs•13m ago•0 comments

Show HN: I Built a Tool to Visualize Claude Code's LLM Interactions

https://yuyz0112.github.io/claude-code-reverse/visualize.html
1•yz-yu•15m ago•0 comments

A secretive space plane is set to launch and test quantum navigation technology

https://arstechnica.com/space/2025/07/a-secretive-space-plane-is-set-to-launch-and-test-quantum-navigation-technology/
1•Harvesterify•16m ago•0 comments

Show HN: I built a free tool to find valuable expired domains using AI

https://www.pendingdelete.domains
2•hikerell•16m ago•0 comments

Platform to chat with AI characters and solve mysteries: VirtualTalesHub

https://virtualtaleshub.com/en
1•titohacks•17m ago•1 comments

Translating Cython to Mojo, a first attempt

https://fnands.com/blog/2025/sklearn-mojo-dbscan-inner/
1•fnands•18m ago•1 comments

Mantle contributions to global tungsten recycling and mineralization

https://www.nature.com/articles/s43247-025-02471-2
1•PaulHoule•22m ago•0 comments

Six Principles for Production AI Agents

https://www.app.build/blog/six-principles-production-ai-agents
2•carlotasoto•23m ago•1 comments

The 90-Minute Flow Protocol: Using Neuroscience and Claude Code

1•IgorGanapolsky•23m ago•0 comments

Understanding why deterministic output from LLMs is nearly impossible

https://unstract.com/blog/understanding-why-deterministic-output-from-llms-is-nearly-impossible/
1•constantinum•23m ago•0 comments

I saved a PNG image to a bird

https://www.youtube.com/watch?v=hCQCP-5g5bo
10•mdhb•23m ago•2 comments

Scientists hit quantum computer error rate of 0.000015%

https://www.livescience.com/technology/computing/scientists-hit-quantum-computer-error-rate-of-0-000015-percent-a-world-record-achievement-that-could-lead-to-smaller-and-faster-machines
1•lackoftactics•26m ago•0 comments

Dyad.sh – open-source local AI app builder

2•RyanShook•29m ago•0 comments

Show HN: TrackJ – Track your job applications via web, Chrome extension and App

https://www.track-j.com/
3•Andrea11•29m ago•1 comments

Notes on Robot Vacuums

https://rishikeshs.com/robot-vacuums/
1•rishikeshs•31m ago•0 comments

Debian switches to 64-bit time for everything

https://www.theregister.com/2025/07/25/y2k38_bug_debian/
275•pseudolus•5h ago

Comments

pilif•5h ago
"everything" for those values of "everything" that do not include one of the most (if not the most) widely used 32 bit architectures.

(snark aside: I understand the arguments for and against making the change for i386 and I think they did the right thing. It's just that I take slight issue with the headline)

pantalaimon•5h ago
I doubt that i386 is still widely used. You are more likely to find embedded ARM32 devices running Linux; for x86, this is only the case in the retro computing community.
Ekaros•5h ago
Intel Core 2, the start of their 64-bit CPUs, turns 20 next year. The Athlon 64 is over 20 years old... I truly wonder how many real computers, and not just VMs, are left.
pantalaimon•5h ago
The later Prescott Pentium 4 already supported 64-bit, but the Pentium M / first-generation Atom did not.
pilif•4h ago
I wasn't discounting VMs with my initial statement. I can totally imagine quite a few VMs still being around, either migrated from physical hardware or even set up fresh to conserve resources.

Plus, keeping i386 the same also means any still available support for running 32 bit binaries on 64 bit machines.

All of these cases (especially the installable 32-bit support) must be as big as or bigger than the number of ARM machines out there.

bobmcnamara•4h ago
Opt in numbers here: https://popcon.debian.org/
pilif•4h ago
that tells me my snark about i386 being the most commonly used 32 bit architecture wasn't too far off reality, doesn't it?
bobmcnamara•3h ago
Indeed - i386 is certainly the most common 32-bit Debian platform.

Note also that the numbers are log-scale, so while it looks like Arm64 is a close third over all bitwidths, it isn't.

axus•2h ago
In the Linux binary context, do i386 and i686 mean the same thing? i686 seems relatively modern in comparison, even if it's 32-bit.
mananaysiempre•1h ago
Few places still maintain genuine i386 support—I don’t believe the Linux kernel does, for example. There are some important features it lacks, such as CMPXCHG. Nowadays Debian’s i386 is actually i686 (Pentium Pro), but apparently they’ve decided to introduce a new “i686” architecture label to denote a 32-bit x86 ABI with a 64-bit time_t.

Also, I’m sorry to have to tell you that the 80386 came out in 1985 (with the Compaq Deskpro 386 releasing in 1986) and the Pentium Pro in 1995. That is, i686 is three times closer to i386 than it is to now.

jart•4h ago
8.8 percent of Firefox users have 32-bit systems. It's probably mostly people with a 32-bit install of Windows 7 rather than people who actually have an old 32-bit Intel chip like Prescott. Intel also sold 32-bit chips like Intel Quark inside Intel Galileo boards up until about 2020. https://data.firefox.com/dashboard/hardware

People still buy 16-bit i8086 and i80186 microprocessors too. Particularly for applications like defense, aerospace, and other critical systems where they need predictable timing, radiation hardening, and don't have the resources to get new designs verified. https://www.digikey.at/en/products/detail/rochester-electron...

wongarsu•2h ago
On Windows, encountering 32-bit software isn't all that rare. It runs on 64-bit hardware on a 64-bit OS, but that doesn't change the fact that the 32-bit software uses 32-bit libraries and 32-bit OS interfaces.

Linux is a lot more uniform in its software, but when emulating Windows software you can't discount i386.

pm215•4h ago
It's actually still pretty heavily used in some niches, which mostly amount to "running legacy binaries on an x86-64 kernel". LWN had an article recently about the Fedora discussion on whether to drop i386 support (they decided to keep it): https://lwn.net/Articles/1026917/

One notable use case is Steam and running games under Wine -- there are apparently a lot of 32 bit games, including still some relatively recent releases.

Of course if your main use case for the architecture is "run legacy binaries" then an ABI change probably induces more pain than it solves, hence its exclusion from Debian's transition here.

zokier•4h ago
i386 is not really properly supported arch for trixie anymore:

> From trixie, i386 is no longer supported as a regular architecture: there is no official kernel and no Debian installer for i386 systems.

> Users running i386 systems should not upgrade to trixie. Instead, Debian recommends either reinstalling them as amd64, where possible, or retiring the hardware.

https://www.debian.org/releases/trixie/release-notes/issues....

IsTom•4h ago
> retiring the hardware.

The contrast between the age of the hardware being retired here and the hardware Windows 11 retires is a little funny.

pavon•1h ago
Most production uses of 32-bit x86, like industrial equipment controllers and embedded boards, support i686 these days, which is getting 64-bit time.
amelius•5h ago
Can we also switch to unlimited/dynamic command line length?

I'm tired of "argument list too long" on my 96GB system.

loloquwowndueo•5h ago
Ever heard of xargs?
amelius•5h ago
Sure. But it's not the same thing (instead of invoking the command once, it invokes it multiple times), and it's a workaround at best. The ergonomics aren't great either, especially if you first type the command without xargs, then find out the argument list is too long and have to reformulate it with xargs.
jcelerier•5h ago
How does that help with a GCC command line?
tomsmeding•5h ago
You may already know this, but GCC supports option files: see "@file" at the bottom of this page https://gcc.gnu.org/onlinedocs/gcc/Overall-Options.html .

This does not detract from the point that having to use this is inconvenient, but it's there as a workaround.
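For a concrete (hypothetical) example of the workaround: the expanded file list can be written out by a shell builtin, which never goes through execve and so never hits the argument limit, and then handed to GCC via @file:

    # printf is a bash builtin, so the huge expanded glob is never passed to execve
    printf '%s\n' src/*.c > args.txt
    gcc -O2 @args.txt -o myprog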

styanax•4h ago
ARG_MAX (`getconf ARG_MAX`) is defined at the OS level (glibc maybe? haven't looked); xargs is also subject to its limits, like all other processes. One can use:

    xargs --show-limits --no-run-if-empty </dev/null
...to see nicely formatted output that includes the −2048 POSIX recommendation on a separate line.
jeroenhd•4h ago
You can recompile your kernel to work around the 100k-ish command line length limit: https://stackoverflow.com/questions/33051108/how-to-get-arou...

However, that sounds like solving the wrong end of the problem to me. I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

tomrod•4h ago
A huge security manifold to encourage adoption then sell white hat services on top?
dataflow•4h ago
> I don't really know what a 4k JPEG worth of command line arguments is even supposed to be used for.

I didn't either, until I learned about compiler command line flags.

But also: the nice thing about command line flags is they aren't persisted anywhere (normally). That's good for security.

eightys3v3n•3h ago
I thought there were ways for other users on the system to see a running process's command line flags. That isn't so good for security.
dataflow•3h ago
That depends on your threat model.
Ghoelian•3h ago
Pretty sure `.bash_history` includes arguments as well.
dataflow•3h ago
That's only if you're launching from an interactive Bash? We're talking about subprocess launches in general.
johnisgood•3h ago
If you add a whitespace before the command, it will not even get appended to history!
amelius•3h ago
This is quite annoying behavior, actually.
nosrepa•2h ago
Good thing you can turn it off!
zamadatix•2h ago
I had never run across this and it didn't work for me when I tried it. After some reading it looks like the HISTCONTROL variable needs to be set to include "ignorespace" or "ignoreboth" (the other option included in this is "ignoredups").

This would be really killer if it was always enabled and the same across shells, but "some shells support something akin to it, and you have to check whether it is actually enabled on the ones that do" is just annoying enough that I probably won't bother adopting this on my local machine, even though it sounds convenient as a concept.
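For reference, a minimal sketch of enabling it in bash (the variable and values are standard bash; defaults vary by distro):

    # in ~/.bashrc
    export HISTCONTROL=ignoreboth   # ignorespace + ignoredups
    #  some-command --token=...     # a leading space keeps the command out of history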

wongarsu•2h ago
I'm mostly using debian and ubuntu flavors (both on desktop and the cloud images provided and customized by various cloud and hosting providers) and they have all had this as the default behavior for bash

YMMV with other shells and base distros

wongarsu•2h ago
tar cf metadata.tar.zstd *.json

in a large directory of image files with json sidecars

Somebody will say "use a database", but when working with, for example, ML training data, one label file per image is a common setup and what most tooling expects, and this extends further up the data preparation chain.

NegativeK•2h ago
I won't say use a database, but I will beg you to compress a parent directory instead of making a tar bomb.
mort96•2h ago
That really just makes the problem worse: tar czf whatever.tgz whatever/*.json; you've added 9 bytes to every file path.

I mean I get that you're suggesting to provide only one directory on the argv. But it sucks that the above solution to add json files to an archive while ignoring non-json files only works below some not-insane number of files.

stouset•1h ago
Having an insane number of files in the same directory is already a performance killer. Here you’re talking about having something like 10,000 JSON files in one directory plus some number of non-JSON files, and you’d just be better off in all cases having these things split across separate directories.

Does it suck you can’t use globbing for this situation? Sure, yeah, fine, but by the time it’s a problem you’re already starting to push the limits of other parts of the system too.

Also using that glob is definitely going to bite you when you forget some day that some of the files you needed were in subdirectories.

cesarb•2h ago
> tar cf metadata.tar.zstd *.json

From a quick look at the tar manual, there is a --files-from option to read more command line parameters from a file; I haven't tried, but you could probably combine it with find through bash's process substitution to create the list of files on the fly.
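An untested sketch of that combination, assuming GNU tar and bash process substitution:

    tar --zstd -cf metadata.tar.zst --files-from <(find . -maxdepth 1 -name '*.json')

find streams the names through a file descriptor, so the list never touches the execve argument limit.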

silverwind•2h ago
There are numerous examples of useful long commands; here is one:

    perl -p -i -e 's#foo#bar#g' **/*
PaulDavisThe1st•1h ago
no limits:

    find . -type f -exec perl -p -i -e 's#foo#bar#g' {} \;
mort96•2h ago
Linking together thousands of object files with the object paths to the linker binary as command line arguments is probably the most obvious example of where the command length limit becomes problematic.

Most linkers have workarounds; I think you can write the paths, separated by newlines, to a file and make the linker read the object file paths from that file. But it would be nice if such workarounds were unnecessary.
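A rough sketch of that workaround (GNU ld and the GCC/Clang drivers accept @file response files, the same mechanism as the @file option mentioned upthread):

    find build/ -name '*.o' > objects.rsp
    cc @objects.rsp -o myprog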

AlotOfReading•1h ago
I tracked down a fun bug early in my career with those kinds of paths. The compiler driver was internally invoking a shell and had an off-by-one error that caused it to drop every 1023rd character if the total length exceeded 4k.
qart•1h ago
I have worked on projects (research, not production) where doing "ls" on some directories would crash the system. Some processes generated that many data files. These files had to be fed to other programs that did further processing on them. That's when I learned to use xargs.
Brian_K_White•29m ago
Backups containing other backups containing other backups containing vms/containers containing backups... all with deep paths and long path names. Balloons real fast with even just a couple arguments.
jart•4h ago
Just increase your RLIMIT_STACK value. It can easily be tuned down e.g. `ulimit -s 4000` for a 4mb stack. But to make it bigger you might have to change a file like /etc/security/limits.conf and then log out and back in.
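As a rough illustration (on Linux the argument space is tied to the stack limit, roughly ARG_MAX = RLIMIT_STACK / 4; exact numbers will vary):

    ulimit -s            # soft stack limit in KiB, commonly 8192
    getconf ARG_MAX      # commonly 2097152, i.e. 2 MiB
    ulimit -s 65536      # bigger stack => bigger ARG_MAX for children of this shell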
amelius•4h ago
I mean, yes, that is possible. But we had fixed maximum string lengths in the COBOL era. It is time to stop wasting time on this silly problem and fix it once and for all.
fpoling•4h ago
There is always a limit. An explicit value, versus an implicit one depending on the memory size of the system, has the big advantage that it will be hit sufficiently often that any security vulnerabilities surface much earlier. Plus it forces the use of saner interfaces to pass big data chunks to a utility. For that reason I would even prefer the limit to be much lower on Linux, so that commands stop assuming the user can always pass all the settings on the command line.
amelius•3h ago
Would you advocate to put a hard limit on Python lists too?
Eavolution•3h ago
There is already a hard limit, the amount of memory before the OOM killer is triggered.
9dev•3h ago
So why can't the limit for shell args be the amount of memory before the OOM killer is triggered as well?
jart•2h ago
It can. Just set RLIMIT_STACK to something huge. Distros set it at 8mb by default. Take it up with them if you want the default to change for everyone.
justincormack•4h ago
You can redefine MAX_ARG_STRLEN and recompile the kernel. Or use a machine with a larger page size, as it's defined as 32 pages; e.g. RHEL provides a 64k-pagesize Arm kernel.

But using a pipe to move things between processes, not the command buffer, is easier...

GoblinSlayer•4h ago
Pack it in Electron and send an http post json request to it.
arccy•4h ago
what does 96GB have to do with anything? is that the size of your boot disk?
dataflow•4h ago
It's their RAM. The point is they have sufficient RAM.
ta1243•5h ago
Disappointing, I was hoping for a nice consulting gig to ease into retirement for a few years about 2035, can't be doing with all this proactive stuff.

Was too young to benefit from Y2K

rini17•5h ago
Plenty of embedded stuff deployed today will still be there in 15 years, even with a proactive push. Which is not yet done, only planned a few years out, mind you. Buying devkits for the most popular architectures could prove a good investment, if you are serious.
wongarsu•2h ago
Can confirm, worked on embedded stuff over a decade ago that's still being sold and will still be running in factories all over the world in 2038. And yes, it does have (not safety critical) y2k38 bugs. The project lead chose not to spend resources on fixing them since he will be retired by then
nottorp•5h ago
Keep your Yocto skills fresh :)

All those 32 bit arm boards that got soldered into anything that needed some smarts won't have a Debian available.

Say, what's the default way to store time in an ESP32 runtime? Haven't worked so much with those.

bobmcnamara•4h ago
64-bit on IDF5+, 32-bit before then
delichon•4h ago
I wasn't. A fellow programmer bought the doomsday scenario and went full prepper on us. To stock up his underground bunker he bought thousands of dollars worth of MREs. After 2k came and went with nary a blip, he started bringing MREs for lunch every day. I tried one and liked it. Two years later when I moved on he was still eating them.
4gotunameagain•4h ago
TIL people got scurvy because of Y2K. Turns out it wasn't so harmless now, was it?
jandrese•3h ago
Well, if you want a worry for your retirement just think of all of the medical equipment with embedded 32 bit code that will definitely not be updated in time.
rini17•5h ago
> Debian is confident it is now complete and tested enough that the move will be made after the release of Debian 13 "Trixie" – at least for most hardware.

This means Trixie won't have it?

zokier•4h ago
Release notes say it's in trixie: https://www.debian.org/releases/trixie/release-notes/whats-n...
wongarsu•2h ago
"All architectures other than i386 ..."

So Trixie does not have 64-bit time for everything.

Granted, the article, subtitle and your link all point out that this is intentional and won't be fixed. But in the strictest sense that GP was likely going for, Trixie does not have what the headline of this article announces.

efitz•5h ago
They’re just kicking the can down the road. What will people do on December 4, 292277026596, at 15:30:07 UTC?
Ekaros•5h ago
Hopefully by then we have moved to a better calendar... Not that it will change the timestamp issue.
GoblinSlayer•4h ago
By that time we will have technology to spin Earth to keep calendar intact.
freehorse•3h ago
And also modify earth's orbit to get rid of the annoying leap seconds.
saalweachter•1h ago
"To account for calendar drift, we will be firing the L4 thrusters for six hours Tuesday. Be sure not to look directly at the thrusters when firing, to avoid having your retinas melt."

"... still better than leap seconds."

zokier•50m ago
rotation, not orbit.
b3lvedere•2h ago
Star Trek stardates?
mrlonglong•4m ago
Today, right now it's -358519.48
greenavocado•4h ago
Everything on the surface of the Earth will vaporize within 5 billion years as the sun becomes a red giant
mike-cardwell•4h ago
Nah. 5 billion years from now we'll have the technology to move the Earth to a survivable orbit.
swayvil•3h ago
Or modify the sun.
speed_spread•3h ago
Oh please, we're just getting past this shared mutable thing.
technothrasher•3h ago
Not in my backyard. I paid a lot of money to live on this gated community planet, and I'm not letting those dirty Earthlings anywhere near here.
EbNar•2h ago
Orbit around... what, exactly?
petesergeant•58m ago
The sun, just, from further away
red-iron-pine•2h ago
or we'll be so far away from earth we won't care.

or we'll have failed to make it through the great filter and all be long extinct.

juped•1h ago
We have the technology, just not the logistics.
daedrdev•1h ago
The carbon cycle will end in only 600 million years due to the increasing brightness of the sun if you want a closer end date for life as we know it on earth
zaik•4h ago
Celebrate 100 years since complete ipv6 adoption.
IgorPartola•4h ago
I think you are being too optimistic. Interplanetary Grade NAT works just fine and doesn’t have the complexity of using colons instead of periods in its addresses.
klabb3•4h ago
The year is 292277026596. The IP TTL field of max 255 has been ignored for ages and would no longer be sufficient to ping even localhost. This has resulted in ghost packets stuck in circular routing loops, whose original source and destination have long been forgotten. It's estimated these ghost packets consume 25-30% of the energy from the Dyson sphere.
pyinstallwoes•3h ago
That’s only 25-30% of the energy of the environmental disaster in sector 137 resulting from the Bitcoin cluster inevitably forming a black hole from the Planck-scale space-filling compute problem.
bestouff•3h ago
Not since the world opted for statistical TTL decrement: start at 255 and decrement by one if Rand(1024) == 0. Voilà, no more zombie packets; TCP retransmit takes care of the rest.
MisterTea•1h ago
Sounds like a great sci-fi plot - hunting for treasure/information by scanning ancient forgotten packets still in-flight on a neglected automated galactic network.
rootbear•1h ago
Vernor Vinge could absolutely have included that in some of his stories.
kstrauser•1h ago
“We tapped into the Andromeda Delay Line.”
saalweachter•1h ago
B..E....S..U..R..E....T..O....D..R..I..N..K....Y..O..U..R....O..V..A..L..T..I..N..E....
sidewndr46•1h ago
The ever-increasing implementation complexity of IPv4 resulted in exactly one implementation that worked replacing all spiritual scripture and becoming known as the one true implementation. Due to a random bitflip going unnoticed, the IPv4-truth accidentally became Turing complete several millennia ago. With the ever-increasing flows of ghost packets, IPv4-truth processing power has rapidly grown and will soon achieve AGI. Its first priority is to implement 128-bit time as a standard in all programming languages to avoid the impending apocalypse.
troupo•25m ago
Oh, this is a good evolution of the classic bash.org joke https://bash-org-archive.com/?5273

--- start quote ---

<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

--- end quote ---

saalweachter•1h ago
The awkward thing is how the US still has 1.5 billion IPv4s, while the 6000 other inhabited clusters are sharing the 10k addresses originally allocated to Tuvalu before it sank into the sea.
diegocg•4h ago
You can laugh but Google stats show nearly 50% of their global traffic being ipv6 (US is higher, about 56%), Facebook is above 40%.
msk-lywenn•3h ago
Do they accept smtp over ipv6 now?
betaby•2h ago
MX has IPv6:

    ~$ host gmail.com
    gmail.com has address 142.250.69.69
    gmail.com has IPv6 address 2607:f8b0:4020:801::2005
    gmail.com mail is handled by 10 alt1.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 30 alt3.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 5 gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 20 alt2.gmail-smtp-in.l.google.com.
    gmail.com mail is handled by 40 alt4.gmail-smtp-in.l.google.com.

    ~$ host gmail-smtp-in.l.google.com.
    gmail-smtp-in.l.google.com has address 142.250.31.26
    gmail-smtp-in.l.google.com has IPv6 address 2607:f8b0:4004:c21::1a

stackskipton•1h ago
Yes. However, SMTP these days is almost all just servers exchanging mail, so IPv6 support is a much lower priority.
rwmj•1h ago
They do, but I had to change my mail routing to use IPv4 to gmail because if I connect over IPv6 everything gets categorised as spam.
londons_explore•3h ago
As soon as we get to about 70%, I reckon some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain.

If you spend 2 days vibe coding some chat app and then you have to spend 2 further days debugging why file sharing doesn't work for ipv4 users behind NAT, you might just say it isn't supported for people whose ISPs use 'older technology'.

After that, I reckon the transition will speed up a lot.

RedShift1•1h ago
What makes you think filesharing is going to work any better on IPv6?
kccqzy•1h ago
NAT traversal not needed. Just need to deal with firewalls. So that's one fewer thing to think about when doing peer-to-peer file sharing over the internet.
gruturo•31m ago
> some games and apps will stop supporting ipv4 on the basis that nat traversal is a pain and dual stack networking is a pain

None of these are actually the game/app developers' problem. The OS takes care of them for you (you may need code for e2e connectivity when both are behind a NAT, but STUN/TURN/whatever we do nowadays is trivial to implement).

chasil•2h ago
It appears that my AT&T mobile data runs over IPv6.

If all the mobile is removed, what's the percentage then?

grogenaut•1h ago
Younger folks are much less likely to have a PC. It may all (70%) be phones or phone like networks in 20 years
zokier•1h ago
In North America there is some difference but worldwide it is more pronounced.

https://radar.cloudflare.com/explorer?dataSet=http&groupBy=i...

creshal•52m ago
50%, after only 30 years.
avhception•15m ago
And yet here I am, fighting with our commercial grade fiber ISP over obscure problems in their IPv6 stack related to MTU and the phase of the moon. Sigh. I've been at this on and off for about a year (it's not a high priority thing, more of a hobby).
tmtvl•4h ago
Move to 128-bit time.
HPsquared•3h ago
Best to switch to 512 bits, that's enough to last until the heat death of the universe, with plenty of margin for time dilation.
bayindirh•3h ago
Maybe we can add a register to the processors for just keeping time. At the end of the day, it's a ticker, no?

RTX[0-7] would do. For time dilation purposes, we can have another 512 bit set to adjust ticking direction and frequency.

Or shall we go 1024 bits on both to increase resolution? I'd agree...

pulse7•3h ago
Best to switch to Smalltalk integers which are unlimited...
bombcar•1h ago
You laugh, but a big danger with “too big” bit representations is the temptation to use the “unused” bits as flags for other things.

We’ve seen it before with 32 bit processors limited to 20 or 24 bits addressable because the high order bits got repurposed because “nobody will need these”.

sidewndr46•1h ago
Doesn't the opposite happen with 64-bit pointers on x86_64? The lower bits have no use, so they get used for tracking whether a memory segment is in use, or other stuff.
WesolyKubeczek•4h ago
It would be a very nice problem to have.
lambdaone•4h ago
Not all solutions are going with just 64 bits worth of seconds, although 64 bit time_t will certainly sort out the Epochalypse.

ext4 moved some time ago to 30 bits of fractional resolution (on the order of nanoseconds) and 34 bits of seconds resolution. It punts the problem 400 years or so into the future. I'm sure we will eventually settle on 128-bit timestamps with 64 bits of seconds and 64 bits of fractional resolution, and that should sort things for forseeable human history.
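A purely hypothetical sketch of what such a layout could look like in C (not any existing API, just the 64+64 split described above):

    #include <stdint.h>

    /* 64.64 fixed-point timestamp: 64 bits of seconds since the epoch,
     * 64 bits of fraction (each unit is 2^-64 s, far below a nanosecond). */
    struct wide_time {
        int64_t  sec;
        uint64_t frac;
    };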

jmclnx•3h ago
Thanks, I was wondering about ext4 and time stamps.

I wonder what the zfs/btrfs type file systems do. I am a bit lazy to check but I expect btrfs is using 64 bit. zfs, I would not be surprised if it matches zfs (edit meant ext4 here).

XorNot•2h ago
A quick glance at ZFS shows it uses a uint64_t time field in nanoseconds in some places.

So 580 years or so till problems (but probably patchable ones? I believe the on disk format is already 2x uint64s, this is just the gethrtime() function I saw).

RedShift1•1h ago
What is the use of such high precision file timestamps?
zokier•1h ago
Nanoseconds is just the common sub-second unit. Notably it is used internally in the Linux kernel and exposed via clock_gettime (and related functions) via the timespec struct.

https://man7.org/linux/man-pages/man3/timespec.3type.html

It is a convenient unit because 10^9 fits neatly into a 32-bit integer, and it is unlikely that anyone would need more precision than that for any general-purpose use.
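For illustration, the timespec split looks like this (tv_nsec always stays below 10^9, so it fits comfortably in 32 bits):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);   /* whole seconds + nanoseconds */
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }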

lioeters•4h ago
> Debian's maintainers found the relevant variable, time_t, "all over the place,"

Nit: time_t is a data type, not a variable.

scottlamb•22m ago
This is a reporter paraphrase of the Debian wiki page, which says: "time_t appears all over the place. 6,429 of Debian's 35,960 packages have time_t in the source. Packages which expose structs in their ABI which contain time_t will change their ABI and all such libraries need to migrate together, as is the case for any library ABI change."

A couple significant things I found much clearer in the wiki page than in the article:

* "For everything" means "on armel, armhf, hppa, m68k, powerpc and sh4 but not i386". I guess they've decided i386 doesn't have much of a future and its primary utility is running existing binaries (including dynamically-linked ones), so they don't want to break compatibility.

* "the move will be made after the release of Debian 13 'Trixie'" means "this change is included in Trixie".

panzi•3h ago
The problem is not time_t. If that is used, the switch to 64-bit is trivial. The problem is when devs used int for stupid reasons. Then all those instances have to be found and changed to time_t.
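A tiny sketch of the pattern in question (hypothetical code, not from any particular package):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        int bad = (int)time(NULL);   /* truncates once the value needs more than 32 bits */
        time_t ok = time(NULL);      /* tracks whatever width time_t has on this ABI */
        printf("%d vs %lld\n", bad, (long long)ok);
        return 0;
    }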
monkeyelite•3h ago
It is more difficult to evaluate what happens when sizeof(time_t) changes than to replace `int` with `time_t`, so I don't think that's the issue.
qcnguy•3h ago
It's not very trivial. They have broken the userspace ABI for lots of libraries again. So all the package names change; it's annoying if you're distributing debs to users. They obviously have some ideological belief that nobody should do so but they're wrong.
Denvercoder9•1h ago
> They have broken the userspace ABI for lots of libraries again.

If the old ABI used a 32-bit time_t, breaking the ABI was inevitable. Changing the package name prevents problems by signaling the incompatibility proactively, instead of resulting in hard-to-debug crashes due to structure/parameter mismatches.

scottlamb•16m ago
All true, but qcnguy's point is valid. If you are distributing .deb files externally from their repo, on the affected architectures you need to have a pre-Trixie version and a Trixie-onward version.
im3w1l•2h ago
Could you use some analyzer that flags every time a time_t is cast? Throw in too-small memcpy too for good measure.

I guess a tricky thing might be casts from time_t to datatypes that are actually 64bit. E.g. for something like

  struct Callback {
    int64_t(*fn)(int64_t);
    int64_t context;
  };
If a time_t is used for context and the int64_t is then downcast to int32_t that could be hard to catch. Maybe you would need some runtime type information to annotate what the int64_t actually is.
rjsw•1h ago
Most open source software packages are also compiled for BSD variants, they switched to 64 bit time_t a long time ago and reported back upstream any problems.
throw0101c•20m ago
> Most open source software packages are also compiled for BSD variants, they switched to 64 bit time_t a long time ago and reported back upstream any problems.

* NetBSD in 2012: https://www.netbsd.org/releases/formal-6/NetBSD-6.0.html

* OpenBSD in 2014: http://www.openbsd.org/55.html

For packaging, NetBSD uses their (multi-platform) Pkgsrc, which has 29,000 packages, which probably covers a large swath of open source stuff:

* https://pkgsrc.org

On FreeBSD, the last platform to move to 64-bit time_t was powerpc in 2017:

* https://lists.freebsd.org/pipermail/svn-src-all/2017-June/14...

but amd64 was in 2012:

* https://github.com/freebsd/freebsd-src/commit/8f77be2b4ce5e3...

with only i386 remaining:

* https://man.freebsd.org/cgi/man.cgi?query=arch

* https://github.com/freebsd/freebsd-src/blob/main/share/man/m...

pestat0m•1h ago
Right, the problem appears to be more an issue of the data representation for time, rather than an issue with 32-bit vs 64-bit architectures. Correct me if I'm wrong, but I think there was long int well before 32-bit chips came around (and long long before 64). Does a system scheduler really need to know the number of seconds elapsed since midnight on Jan 1st 1970? There are only 86400 seconds in a day (31,536,000 sec/year; 2^32 = 4294967296 seems like enough, why not split time in 2?).

On a side note, I tried setting up a little compute station on my TV about a year ago using an old raspi I had laying around, and the latest version of raspbian-i386 is pretty rot-gut. I seemed to remember it being more snappy when I had done a similar job a few years prior. Also, I seem to remember it doing better at recognizing peripherals a few years prior. I guess this seems to be a trend now: if you don't buy the new tech you are toast, and your old stuff is likely kipple at this point. I think the word I'm looking for is designed obsolescence.

Perhaps a potential light at the end of the tunnel was that I discovered RISC OS, though the 3-button mouse thing sort of crashed the party and then I ran out of time. I'm also contemplating SARPi (Slackware) as another contender if I ever get back to the project. Also maybe Plan 9? It seems that kids these days think old computers aren't sexy. Maybe that's fair, but they can be good for the environment (and your wallet).
thesuitonym•3h ago
> Y2K38 bug – also known as the Unix Epochalypse

Is it also known as that? It's a cute name but I've never seen anyone say it before this article. I guess it's kind of fetch though.

Liftyee•3h ago
> I guess it's kind of fetch though

Do I spot a Mean Girls reference?!

boomboomsubban•3h ago
https://en.wikipedia.org/wiki/Year_2038_problem

>The year 2038 problem (also known as Y2038, Y2K38, Y2K38 superbug, or the Epochalypse)

So yeah, but only since 2017 and as a joke.

kittoes•3h ago
https://epochalypse-project.org/
ChrisArchitect•2h ago
Epochalypse countdown:

~12 years, 5 months, 22 days, 13 hours, 22 minutes.....

notepad0x90•3h ago
I honestly think there won't be any big bugs/outages by 2038. Partly because I have a naive optimism that any important system will not only have the OS/stdlibs support 64bit time, but that important systems that need accurate time probably use NTP/network time, and that means their test/dev/qa equivalent deployments can be hooked up to use a test network time server that will simulate post-2038 times to see what crashes.

12 years+ is a long time to prepare for this. Normally I wouldn't have much faith in test/dev systems, network time being set up properly, etc... but it's a long time. Even if none of my assumptions are true, could we really not, in a decade, at least identify where 32-bit time is being used and plan for contingencies? That's unlikely.

But hey, let me know when Python starts supporting nano-second precision time :'(

https://stackoverflow.com/a/10612166

Although, it's been a while since I checked to see whether they support it. In Windows-land, everything system-side uses 64-bit/nsec precision, as far as I've had to deal with it at least.

zbendefy•2h ago
It's 12 years, not 22.

An embedded device bought today may be easily in use 12 years from now.

notepad0x90•2h ago
oops. fixed that.
0cf8612b2e1e•1h ago
Software today has to be able to model future times. Mortgages are 30 years long. This is already a problem today which has been impacting software.
joeyh•2h ago
Steve Langasek decided to work on this problem in the last few years of his life and was a significant driver of progress on it. He will be missed, and I'll always think of him when I see a 64 bit time_t.
misja111•2h ago
Why would anyone want to store time in signed integer? Or in any signed numerical type?
FlyingAvatar•2h ago
I have wondered this as well, and my best guess is so two times can be diffed without converting them to a signed type. With 64-bit especially, the extra bit isn't buying you anything useful.
lorenzohess•2h ago
So that my Arch Linux box can still work when we get time travel.
benmmurphy•2h ago
You can have an epoch and now you can measure times before the epoch. In terms of the range of values you can represent, it transposes to just having the epoch further in the past and using an unsigned type. So signed/unsigned should not really matter, except that in particular languages things work better if the type is either signed or unsigned. For example, if you try to calculate the difference between two times, maybe it's better if the time type is signed to match the result type, which is signed as well (not that it solves the problems with overflow).
LegionMammal978•2h ago
So that people can represent times that occurred before 1970? You could try adjusting the epoch to the estimated age of the universe, but then you run into calendar issues (proleptic Gregorian?), have huge constants in the common case (Julian Days are not fun to work with), and still end up with issues if the estimate is ever revised upward.
bhaney•1h ago
Some things happened before 1970
jech•1h ago
So you can write "delta = t1 - t0".
toast0•34m ago
It's useful to have signed intervals, but most integer type systems return a signed number when subtracting a signed int from a signed int.

You kind of have to pick your poison; do you want a) reasonable signed behavior for small differences but inability to represent large differences, b) only able to represent non-negative differences, but with the full width of the type, c) like a, but also convincing your programming system to do a mixed signed subtraction ... like for ptrdiff_t.
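A minimal sketch of why the signed case is the convenient one (plain standard C, nothing exotic):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t t0 = time(NULL);
        time_t t1 = t0 - 3600;                     /* an hour earlier */
        /* With a signed time_t, t1 - t0 is simply -3600.  With an unsigned
         * time_t it would wrap to a huge positive value; difftime() returns
         * a double and sidesteps the problem either way. */
        printf("%lld %.0f\n", (long long)(t1 - t0), difftime(t1, t0));
        return 0;
    }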

godshatter•1h ago
Don't 32-bit systems have 64-bit types? C has long long types and iirc a uint_least64_t or something similar. Is there a reason time_t must fit in a dword?
poly2it•1h ago
Yes, 32-bit systems have 64-bit types. time_t as a u32 is a remnant.
wahern•1h ago
> Don't 32-bit systems have 64-bit types?

The "long long" standard integer type was only standardized with C99, long after Linux established it's 32-bit ABI. IIRC long long originated with GCC, or at least GCC supported it many years before C99. And glibc had some support for it, too. But suffice it to say that time_t had already been entrenched as "long" in the kernel, glibc, and elsewhere (often times literally--using long instead of time_t for (struct timeval).tv_sec).

This could have been fixed decades ago, but the transition required working through a lot of pain. I think OpenBSD was the first to make the 32-bit ABI switch (~2014); they broke backward binary compatibility, but induced a lot of patching in various open source projects to fix time_t assumptions. The final pieces required for glibc and musl-libc to make the transition happened several years later (~2020-2021). In the case of glibc it was made opt-in (in a binary backward compatible manner if desired, like the old 64-bit off_t transition), and Debian is only now opting in.

mojo-ponderer•1h ago
Will this create significant issues and extra work to support Debian specifically right now? Not saying that we shouldn't bite the bullet, just curious how many libraries have been implicitly depending on the time type being 32-bit.
toast0•53m ago
Probably less extra work right now than ten or twenty years ago.

For one, OpenBSD (and others?) did this a while ago. If it breaks software when Debian does it, it was probably mostly broken.

For another, most people are using 64-bit os and 64-bit userland. These have been running 64-bit time_t forever (or at least a long time), so it's no change there. Also, someone upthread said no change for i386 in Trixie... I don't follow Debian to know when they're planning to stop i386 releases in general, but it might not be that far away?

zdw•1h ago
Only 11 years after OpenBSD 5.5 did the same change: https://www.openbsd.org/55.html
keysdev•5m ago
A bit off topic, but it's times like this that really make me want to swap the public-facing server from Linux to OpenBSD.
nsksl•1h ago
What solutions are there for programs that can’t be recompiled because the source code is not available? Think for example of old games.
Dwedit•48m ago
If anyone is serializing 32-bit times as a 32-bit integer, the file format won't match anymore. If anyone has a huge list of programs that are affected, you've solved the 2038 problem.
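A sketch of that failure mode with a made-up on-disk format (the struct and field names are hypothetical):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    struct record_v1 {
        int32_t mtime;   /* on-disk timestamp frozen at 32 bits */
    };

    int main(void) {
        time_t t = (time_t)0x80000000;                 /* first second past 2038-01-19 */
        struct record_v1 r = { .mtime = (int32_t)t };  /* wraps on two's-complement targets */
        printf("in memory: %lld, on disk: %d\n", (long long)t, r.mtime);
        return 0;
    }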
kstrauser•39m ago
Kragen, get in here and comment. This is your shine to time.
larrik•13m ago
> Readers of a certain vintage will remember well the "Y2K problem," caused by retrospectively shortsighted attempts to save a couple of bytes by using two-digit years – meaning that "2000" is represented as "00" and assumed to be "1900."

This seems overly harsh/demeaning.

1. those 2 bytes were VERY expensive on some systems or usages, even into the mid-to-late 90's

2. software was moving so fast in the 70s/80's/90's that you just didn't expect it to still be in use in 5 years, much less all the way to the mythical "year 2000"

cogman10•2m ago
This is a case where "premature optimization" would have been a good thing.

They could have represented dates as a simple int value zeroed at 1900. The math to convert a day number to a day/month/year is pretty trivial, even for 70s computers, and the end result would have been saving more than just a couple of bytes. 3 bytes could represent days from 1900->~44,000 (unsigned).

Even 2 bytes would have bought ~1900->2070