I see someone else is a Vernor Vinge fan.
But it's kind of a wild choice for an epoch when you're very likely to be interfacing with systems whose epoch starts approximately five months later.
Timezones, sure. But what about before timezones came into use? Or even halfway through: which timezone applies, considering Königsberg used CET when it was part of Germany but switched to EET after it became Russian? There are even countries whose UTC offsets differ by 15-minute increments.
And don't get me started on daylight saving time. There's been at least one instance where DST was, and was not, in use in Lebanon at the same time! Good luck booking an appointment...
Not to mention the transition from the Julian to the Gregorian calendar, which took place over many, many years, at different times in different countries, as defined by the country borders of the day...
We've even had countries that forgot to insert a leap day in certain years, causing March 1 to fall on a different day altogether for a couple of years.
Time is a mess. It is, always has been, and always will be.
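Those 15-minute offsets really do exist in the tz database, by the way. A quick check, assuming Python 3.9+ with `zoneinfo` available (on some platforms you may need the `tzdata` package installed):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Two real zones with 45-minute components in their UTC offsets.
t = datetime(2024, 6, 1, tzinfo=timezone.utc)
for name in ("Asia/Kathmandu", "Australia/Eucla"):
    print(name, t.astimezone(ZoneInfo(name)).utcoffset())
# Asia/Kathmandu 5:45:00
# Australia/Eucla 8:45:00
```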
An IANA timezone uniquely refers to the set of regions that not only share the same current rules and projected future rules for civil time, but also share the same history of civil time since 1970-01-01 00:00 UTC. In other words, this definition is restrictive about which regions can be grouped under a single IANA timezone: if a given region changed its civil time rules at any point since 1970 in a way that deviates from the history of civil time of the other regions, then that region can't be grouped with them.
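A concrete illustration of that grouping rule, as a sketch using Python's `zoneinfo` (assuming tz data is available on your system): Warsaw and Kaliningrad shared civil time for stretches of the early 20th century, but their rules have diverged since, so they must remain separate IANA zones.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Same winter instant, different offsets: Warsaw observes CET (+01),
# Kaliningrad is on +02 year-round.
t = datetime(2024, 1, 15, tzinfo=timezone.utc)
for name in ("Europe/Warsaw", "Europe/Kaliningrad"):
    print(name, t.astimezone(ZoneInfo(name)).utcoffset())
# Europe/Warsaw 1:00:00
# Europe/Kaliningrad 2:00:00
```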
I agree that time is a mess. And the 15-minute offsets are insane and I can't fathom why anyone is using them.

% zdump -i Europe/Warsaw | head
TZ="Europe/Warsaw"
- - +0124 LMT
1880-01-01 00 +0124 WMT
1915-08-04 23:36 +01 CET
1916-05-01 00 +02 CEST 1
1916-10-01 00 +01 CET
1917-04-16 03 +02 CEST 1
1917-09-17 02 +01 CET
1918-04-15 03 +02 CEST 1
% zdump -i Europe/Kaliningrad | head -20
TZ="Europe/Kaliningrad"
- - +0122 LMT
1893-03-31 23:38 +01 CET
1916-05-01 00 +02 CEST 1
1916-10-01 00 +01 CET
1917-04-16 03 +02 CEST 1
1917-09-17 02 +01 CET
1918-04-15 03 +02 CEST 1
1918-09-16 02 +01 CET
1940-04-01 03 +02 CEST 1
1942-11-02 02 +01 CET
1943-03-29 03 +02 CEST 1
1943-10-04 02 +01 CET
1944-04-03 03 +02 CEST 1
1944-10-02 02 +01 CET
1945-04-02 03 +02 CEST 1
1945-04-10 00 +02 EET
1945-04-29 01 +03 EEST 1
1945-10-31 23 +02 EET
%
"Wanna grab lunch at 1,748,718,000 seconds from the Unix epoch?"
I'm totally going to start doing that now.
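For anyone who wants to RSVP, decoding that lunch invitation is a one-liner in Python:

```python
from datetime import datetime, timezone

# 1,748,718,000 seconds after the Unix epoch, in UTC.
lunch = datetime.fromtimestamp(1_748_718_000, tz=timezone.utc)
print(lunch.isoformat())
# 2025-05-31T19:00:00+00:00
```

So lunch is at 19:00 UTC; when that is on your wall clock is, of course, exactly the mess this thread is about.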
https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b...
In a nutshell: whatever you believe about time, you're wrong; there is always an exception, and an exception to the exception. And then Doc Brown runs you over with the DeLorean.
to string representations!
- System clock drift. Google's instances keep accurate time using atomic clocks in the datacenter, with leap seconds smeared over a day. For accurate duration measurements, this may matter.
- Consider how the time information is consumed. For a photo sharing site, the best info to keep with each photo is a location and the local date-time. Then even if some of this is missing, a New Year's Eve photo will still read as close to midnight without consulting its timezone or location. I had this case and opted for string representations that wouldn't be automatically adjusted. Converting to the viewer's local time isn't useful.
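That "store the local wall-clock time as a plain string" approach can be sketched like this (the record layout and field names are my own illustration, not from any particular system):

```python
from datetime import datetime

# Hypothetical photo record: keep local wall-clock time as an
# offset-free ISO 8601 string plus a location, rather than a UTC instant.
photo = {
    "local_time": "2024-12-31T23:58:07",  # no offset: never auto-adjusted
    "location": (52.2297, 21.0122),       # lat/lon, e.g. Warsaw
}

# A New Year's Eve photo reads as near-midnight with no tz lookup at all.
dt = datetime.fromisoformat(photo["local_time"])
print(dt.hour, dt.minute)
# 23 58
```

The point is that the stored value only ever means what the photographer's wall clock said; no library can "helpfully" shift it when the viewer is in another zone.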
My guess is that, with our lives increasingly dependent on digital systems, the edge cases where these rules aren't properly updated cause growing amounts of pain "for no good reason".
In Brazil we changed our DST rules recently, around 2017/2018, and it caused a lot of confusion. I was working on a system where these changes really mattered, so I was aware of the change ahead of time. But a lot of systems run without much human intervention, and they're mostly forgotten until someone notices a problem.
I know you had to limit the length of the post, but time is an interest of mine, so here are a couple more points you may find interesting:
UTC is not an acronym. The story I heard is that the English acronym would have been "CUT" (the name is "coordinated universal time") and the French complained; the French acronym would have been "TUC" and the English-speaking committee members complained; so they settled on something that wasn't pronounceable in either language. (FYI, "ISO" isn't an acronym either!)
Leap seconds caused such havoc (especially in data centers) that they're being retired: the 2022 CGPM resolution calls for abandoning the leap second by or before 2035. (What will happen after that is anyone's guess.) But for now, you can rest easy and ignore them.
I have a short list of time (and NTP) related links at <https://wpollock.com/Cts2322.htm#NTP>.
What they did instead was to "smear" the leap second across the day, lengthening every second by 1/86400 of a second. That extra ~11.6 µs per second is well within the margin of error for NTP, so computers could carry on doing what they do without throwing errors.
Edit: they smeared it from the noon before the leap second to the noon after, i.e. Dec 31 12:00 to Jan 1 12:00 UTC.
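The arithmetic of that linear smear is easy to sketch (my own illustration of the idea, not Google's implementation):

```python
SMEAR_WINDOW = 86_400  # smeared seconds between the two noons

def smeared_elapsed(real_elapsed: float) -> float:
    """Smeared seconds displayed, given real SI seconds since the window
    start. Each smeared second lasts (1 + 1/86400) SI seconds, so 86,401
    real seconds map onto exactly 86,400 smeared ones: the leap second
    is absorbed without any clock ever reading 23:59:60."""
    return real_elapsed * SMEAR_WINDOW / (SMEAR_WINDOW + 1)

print(smeared_elapsed(86_401))
# 86400.0
```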
Instead I mostly use time for durations and for happens-before relationships. I still use Unix-flavored timestamps, but where I can I ensure monotonicity (guarding against backward clock jumps) and never trust timestamps from untrusted sources (usually: another node on the network). It often makes more sense to record the time a message was received than to trust the sender.
That said, I am fortunate to not have to deal with complicated happens-before relationships in distributed computing. I recall reading the Spanner paper for the first time and being amazed how they handled time windows.
But I hate how, when I stack my yearly weather charts, every four years either the graph is off by one day (so it's 1/366th narrower and the month delimiters don't line up perfectly), or I have to duplicate Feb 28 so there's no discontinuity in the lines. Still not sure how to represent that, but it sure bugs me.