frontpage.

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•3m ago•0 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•3m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•7m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•8m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•10m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•12m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•15m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•16m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
2•1vuio0pswjnm7•18m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•19m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•21m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•24m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•29m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•31m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•34m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•46m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•48m ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•49m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•3 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Fire destroys S. Korean government's cloud storage system, no backups available

https://koreajoongangdaily.joins.com/news/2025-10-01/national/socialAffairs/NIRS-fire-destroys-governments-cloud-storage-system-no-backups-available/2412936
2080•ksec•4mo ago
https://www.chosun.com/english/national-en/2025/10/02/FPWGFS...

Comments

benoau•4mo ago
> However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.

Yikes. You'd think they would at least have one redundant copy of it all.

> erasing work files saved individually by some 750,000 civil servants

> 30 gigabytes of storage per person

That's 22,500 terabytes, about 50 Backblaze storage pods.

Or even just mirrored locally.
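
A quick sanity check of those figures (a sketch; the ~450 TB-per-pod capacity is an assumption chosen to match the commenter's "about 50 pods", not a quoted spec):

    # Back-of-the-envelope check of the capacity math above.
    civil_servants = 750_000
    quota_gb = 30                                  # 30 GB of storage per person

    total_tb = civil_servants * quota_gb / 1_000   # GB -> TB (decimal units)
    print(f"{total_tb:,.0f} TB")                   # 22,500 TB

    pod_capacity_tb = 450                          # assumed per-pod raw capacity
    print(f"{total_tb / pod_capacity_tb:.0f} Backblaze pods")  # 50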

yongjik•4mo ago
It's even worse. According to other articles [1], the total data on the "G drive" was 858 TB.

It's almost farcical to calculate, but AWS S3 is priced at about $0.023/GB/month, which means the South Korean government could have had a reliably replicated backup of the whole dataset for about $20k/month, or about $850/month on the "Glacier Deep Archive" tier ($0.00099/GB/month).

They did have a backup of the data ... in the same server room that burned down [2].

[1] https://www.hankyung.com/article/2025100115651

[2] https://www.hani.co.kr/arti/area/area_general/1221873.html

(both in Korean)
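
Checking that arithmetic against the quoted rates (a sketch; actual S3 prices vary by region and tier, and the figures are the comment's, not current list prices):

    # Cost of backing up the reported 858 TB at the S3 rates quoted above.
    total_gb = 858_000                  # 858 TB, per the linked article

    s3_standard = 0.023                 # $/GB/month, as quoted
    glacier_deep = 0.00099              # $/GB/month, as quoted

    print(f"S3 Standard:  ${total_gb * s3_standard:,.0f}/month")   # ~$19,734
    print(f"Deep Archive: ${total_gb * glacier_deep:,.0f}/month")  # ~$849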

paleotrope•4mo ago
That's unfortunate.
poly2it•4mo ago
It's incompetent really.
lukan•4mo ago
No. Fortuna had nothing to do with this; this is called bad planning.
BolexNOLA•4mo ago
Couldn’t even be bothered to do a basic 3-2-1! Wow
sneak•4mo ago
Did you expect government IT in a hierarchical respect-your-superiors-even-when-wrong society to be competent?
BolexNOLA•4mo ago
I mean...I feel you but holy hell dude. Nothing? Boggles the mind.

Edit: my bad, backups in the room is something; somehow I just forgot about that part

sneak•4mo ago
It wasn’t nothing. They had backups, according to yongjik above.
SamPatt•4mo ago
Do backups in the same room count as backups?
username332211•4mo ago
South Korea isn't some sort of backwards nation, and I'm sure its chaebols share the same culture.

Having had unfortunate encounters with government IT in other countries, I can bet that the root cause wasn't the national culture. It was the internal culture of "I want to do the exact same thing I've always done until the day I retire."

Absent outside pressure, civil services across the world tend to advance scientifically - one funeral (or retirement) at a time.

rvba•4mo ago
How does this even make sense business-wise for AWS?

Is their cost per unit so low?

Ekaros•4mo ago
When you start to do the math, hard drives are cheap when you go for capacity and not performance.

$0.00099 × 1000 is $0.99 per TB per month, so about $12 a year. Now extrapolate over something like a 5-year or 10-year period, and you get to $60-120 per TB. Even at 3x to 5x redundancy, those numbers start to add up.
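
Spelled out at the same quoted rate (a sketch; the redundancy multipliers are the ones the comment names):

    # Lifetime per-TB cost at the quoted Glacier Deep Archive rate.
    per_tb_month = 0.00099 * 1_000      # $0.99 per TB per month
    per_tb_year = per_tb_month * 12     # ~$12 per TB per year

    for years in (5, 10):
        for redundancy in (3, 5):
            cost = per_tb_year * years * redundancy
            print(f"{years} years at {redundancy}x: ${cost:.0f} per TB")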

vbezhenar•4mo ago
S3 does not spend 3x the drives to provide redundancy. Probably 20% more drives or something like that. They split data into chunks and use erasure coding to store them across multiple drives with little overhead.
hapanin•4mo ago
wait, can you elaborate on how this works?
vbezhenar•4mo ago
You have a 100-byte file. You split it into 10 chunks (data shards) and add an 11th chunk (parity shard) as the XOR of all 10 chunks. Now you store every chunk on a separate drive. So you have 100 bytes and you spent 110 bytes to store them all. Now you can survive one drive death, because you can recompute any missing chunk as the XOR of all the remaining chunks.

That's a very primitive explanation, but it should be easy to understand.

In reality S3 uses a different algorithm (probably Reed-Solomon codes) and some undisclosed number of shards (probably different for different storage classes). Some say they use 5 of 9 (so 5 data shards + 4 parity shards, which makes for 80% overhead), but I don't think that's official information.
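
That scheme is small enough to run (a minimal sketch of single-parity XOR erasure coding; real systems such as S3 use Reed-Solomon codes that tolerate multiple simultaneous losses):

    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    data = bytes(range(100))                              # a 100-byte "file"
    shards = [data[i:i + 10] for i in range(0, 100, 10)]  # 10 data shards
    parity = reduce(xor, shards)                          # the 11th (parity) shard
    stored = shards + [parity]                            # 110 bytes stored in total

    lost = 3                                              # pretend drive 3 died
    survivors = [s for i, s in enumerate(stored) if i != lost]
    rebuilt = reduce(xor, survivors)                      # XOR of the rest recovers it
    assert rebuilt == stored[lost]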

gundmc•4mo ago
AFAIK geo-replication between regions _does_ replicate the entire dataset. It sounds like you're describing RAID configurations, which are common ways to provide redundancy and increased performance within a given disk array. They definitely do that too, but within a zone.
alexjurkiewicz•4mo ago
S3 uses 5-of-9 erasure coding[1]. That's roughly 1.8x raw storage, or 80% overhead.

[1] https://bigdatastream.substack.com/p/how-aws-s3-scales-with-...

burnt-resistor•4mo ago
And S3 RRS and Glacier do even less.
sudo_and_pray•4mo ago
This is just the storage cost. That is, they will keep your data on their servers, nothing more.

Now if you want to do something with the data, that's where you need to hold on to your wallet. Either you use their compute ($$$ for Amazon) or you send it to your own data centre (egress means $$$ for Amazon).

npteljes•4mo ago
They charge little for storage and upload, but download, so getting your data back, is pricey.
Imustaskforhelp•4mo ago
Mate, this is better than an entire nation's data getting burned.

Yes, it's pricey, but possible.

Now it's literally impossible.

I think AWS Glacier at that scale should be the preferred option. They had their own in-house data too, but they still should have wanted an external backup, and they are literally the government, so they of all people shouldn't worry about prices.

Have secure encrypted backups in AWS and other places too, and design the system around the threat model: absolutely filter out THE MOST important stuff from those databases. But that would require labeling it, which I suppose would draw even more attention from anyone trying to exfiltrate / send it to the likes of North Korea or China, so it's definitely a mixed bag.

My question, as I said multiple times: why didn't they build a backup in South Korea only, using some other datacentre in South Korea as the backup, so as not to have to worry about the encryption question? I don't really know, and imo it would make more sense for them to actually have a backup in AWS and not worry about encryption, since I find the tangents about breaking encryption a bit unreasonable; if that's the case, then all bets are off and the servers would get hacked too, which was the point of the Phrack piece on the advanced persistent threat, and so much more...

And are we all forgetting that Intel has a proprietary OS, MINIX, running in the most privileged state, which can even take Java bytecode over the network and execute it, all of it proprietary? That is a bigger security threat to me personally, if they are indeed using it, which I suppose they might be.

npteljes•4mo ago
I just responded to "How does this even make sense business wise for AWS?"
lucb1e•4mo ago
It's expensive if you calculate what it would cost for a third party to compete with. Or see e.g. this graph from a recent HN submission: https://si.inc/posts/the-heap/#the-cost-breakdown-cloud-alte...
mastax•4mo ago
I made an 840TB storage server last month for $15,000.
ycombinatrix•4mo ago
840TB before or after configuring RAID?
mastax•3mo ago
840TB raw unformatted.
maxlin•4mo ago
I have almost 10% of that in my closet, RAID5'd, with a large part of it backing up constantly to Backblaze for $10/month, running on 10-year-old hardware where basically only the hard drives have any value ... I used a case made of cardboard till I wanted to improve the cooling, and then got a used Fractal Design case for 20€.

_Only_ the combination of incompetence and bad politics seen here can lead to losing this large a share of the data, given that the policy was to save stuff only on that "G-drive" and avoid local copies. The "G-drive" they intentionally did not back up, because they couldn't figure out a solution to at least store a backup across the street ...

rasz•4mo ago
>AWS S3 has pricing of about $0.023/GB/month, which means ... about $20k/month

or outright buying hardware capable of storing 850TB for the same $20K as a one-time payment. Gives you some perspective on how overpriced AWS is.

swarnie•4mo ago
Where are you getting 850TB of enterprise storage for $20k?

I had 500TB of object storage priced last year and it came out closer to $300k

kachapopopow•4mo ago
136TB for $3k (used gen-2 Epyc hardware and refurbished 16TB HDDs with under an hour of use). They're zero-risk after firmware validation and one full drive read and write.
HHad3•4mo ago
That's including the enterprise premium for software, hardware support, and licenses. Building this in-house using open source software (e.g. Ceph) on OEM hardware will be cheaper by an order of magnitude.

You of course need people to maintain it -- the $300k turnkey solution might be the better option depending on current staff.

mort96•4mo ago
Priced out by whom? What kind of object storage? Were you looking at the price of drives, or the price of a complete solution delivered by some company?
jeroenhd•4mo ago
AWS? Linus Tech Tips has run multiple petabyte servers in their server closet just for sponsor money and for the cool of it. No need to outsource your national infrastructure to foreign governments, a moderate (in government terms) investment in a few racks across the country could've replicated everything for maybe half a year's worth of Amazon subscription fees.
TheRoque•4mo ago
Exactly, everyone here on hackernews is talking about Azure/AWS/GCP as if it was the only correct way to store data. Americans are too self centered, it's quite crazy.
mrbadguy•4mo ago
Yeah the comments here are slightly surreal; the issue was that they didn’t have an off-site backup at all, not that it wasn’t on AWS or whatever.
axus•4mo ago
But then they will depend on the security people at both sides to agree on the WAN configuration. Easier to let everything burn in a fire and rebuild from scratch.
baobabKoodaa•4mo ago
You're assuming the average worker utilized the full 30GB of storage. More likely the average was at like 0.3GB.
jeroenhd•4mo ago
On the other hand: backups should also include a version history of some kind, or you'd be vulnerable to ransomware.
PeterStuer•4mo ago
I'm sure they had dozens of process heavy cybersecurity committees producing hundreds if not thousands of powerpoints and word documents outlining procedures and best practices over the last decade.

There is this weird divide between the certified class of non-technical consultants and the actual overworked techs pushed to cut corners.

zaphar•4mo ago
Ironically many of those documents for procedures probably lived on that drive...
ksec•4mo ago
I don't know why, but I can't stop laughing. And the great thing is that they will get paid again to write the same thing.
comprev•4mo ago
You jest, but I once had a client whose IaC provisioning code was - you guessed it - stored on the very infrastructure that got destroyed.
__turbobrew__•4mo ago
If you are one of the big boys (FAANG and other large companies who run physical infra) you will have this problem as well. The infra systems run and replace themselves, and something fundamental can break circularly: for example, your deployment system requires DNS, but your DNS servers are broken, and you cannot deploy a fix because the deploy service requires DNS.

From what I have seen, a lot of the time the playbooks to fix these issues are just rawdogging files around with rsync manually. Ideally you deploy your infrastructure in cells, where rollouts proceed cell by cell so you can catch issues sooner, and you also implement failover to bootstrap broken cells (in my DNS example, clients could talk to DNS servers in the closest non-broken cell using BGP-based routing). It is hard to test, and there are still some global services (that big Google outage a few months ago was due to the global auth service being down).

perihelions•4mo ago
Here's a 2024 incident:

> "The outage also hit servers that host procedures meant to overcome such an outage... Company officials had no paper copies of backup procedures, one of the people added, leaving them unable to respond until power was restored."

https://www.reuters.com/technology/space/power-failed-spacex...

toast0•4mo ago
The data seems secure. No cyberthreat actors can access it now. Effective access control: check.
miniBill•4mo ago
Ironically, see the phrack article someone linked above
senkora•4mo ago
I like the definition of security = confidentiality + integrity + availability.

So confidentiality was maintained but integrity and availability were not.

Titan2189•4mo ago
Surely there must be something that's missing in translation? This feels like it simply can't be right.
mrbluecoat•4mo ago
I agree. No automated fire suppression system for critical infrastructure with no backup?
fredoralive•4mo ago
That may not be a perfect answer. One issue with fire suppression systems and spinning-rust drives is that the pressure change etc. from the discharge can 'suppress' the glass platters in the drives as well.
privatelypublic•4mo ago
I'd be interested in whether you can even use dry fire suppression on the 5th floor of a building.
magicalhippo•4mo ago
Reminds me of the classic video[1] showing how shouting at hard drives makes them go slower.

[1]: https://www.youtube.com/watch?v=tDacjrSCeq4

perlgeek•4mo ago
That's why the top-security DCs that my employer operates have large quantities of nitrogen stored, and use it to slightly lower the O2 saturation of the air in case of fire.

Yes, it's fucking expensive; that's one of the reasons you pay more for a VM (or colocation) than at Hetzner or OVH. But I'm also pretty confident that a single fire wouldn't destroy all the hard drives in that IT space.

rini17•4mo ago
Battery fire is impossible to suppress.
oceansky•4mo ago
Much harder, but not impossible.
delfinom•4mo ago
Lithium-ion batteries go into thermal runaway. The flame can be somewhat suppressed by displacing oxygen and/or spraying shit on it to prevent the burning of material. But it's still going thermonuclear and putting out incredibly hot gases. The only way to suppress it is by dunking the batteries in water to sap the energy out of them.
perlgeek•4mo ago
That's why in high-quality DCs, battery backup is in a separate room with good fire isolation from the IT space.

Yes, the servers still have some small batteries on their mainboards etc, but it's not too bad.

BonoboIO•4mo ago
At first you think only an incompetent government would do such things, but even OVH did pretty much the same a few years ago, and destroyed some companies in the process. A wooden floor in a datacenter, with backups in the same building …

https://www.datacenterdynamics.com/en/news/ovhcloud-fire-rep...

layer8•4mo ago
It’s accurate: https://www.chosun.com/english/national-en/2025/10/02/FPWGFS...
nextworddev•4mo ago
Because it was arson, not an accident
pengaru•4mo ago
Arson? Sounds increasingly like espionage.
BrandoElFollito•4mo ago
> all documents be stored exclusively on G-Drive

Does G-Drive mean Google Drive, or "the drive you see as G:"?

If this is Google Drive, what they had locally were just pointers (for native Google Drive docs), or synchronized documents.

If this means the letter a network disk storage system was mapped to, this is a weird way of presenting the problem (I am typing on the black keyboard at the wooden table, just so you know)

lysace•4mo ago
The name G-Drive is said to be derived from the word ‘government’.
indy•4mo ago
It's now derived from the word 'gone'
ncr100•4mo ago
'Gone' up in smoke
kristianc•4mo ago
It's an X-Drive now
prmph•4mo ago
G-drive was simply the name of the storage system
bryanhogan•4mo ago
Saw a few days ago that the application site for the GKS, the most important scholarship for international students in Korea, went offline for multiple days. Surprising to hear that they really lost all of the data, though. Great opportunity to build a better website now?

But yeah, it's a big problem in Korea right now; lots of important information just vanished, and many are talking about it.

Zacharias030•4mo ago
Must have been a program without much trickle down into gov tech
m3047•4mo ago
Mindblowing. Took a walk. All I can say is that if business continues "as usual" and the economy and public services continue largely unaffected, then either there were local copies of critical documents, or you can fire a lot of those workers; either way, the "stress test" was a success.
layer8•4mo ago
“Final reports and official records submitted to the government are also stored in OnNara, so this is not a total loss”.
dghlsakjg•4mo ago
How do you come to the conclusion that because things work without certain documents that you can start laying off workers?
RaptorJ•4mo ago
Surely having human-resource backups will also help with disaster recovery
MiddleEndian•4mo ago
>or you can fire a lot of those workers

Sometimes things can seem to run smoothly for years when neglected... until they suddenly no longer run smoothly!

npteljes•4mo ago
Long-term damage and risk are two things that don't show up in a test like this. Also, often things only keep moving forward on momentum built up in the past.
danparsonson•4mo ago
Yeah you can do the same with your car too - just gradually remove parts and see what's really necessary. Seatbelts, horn, rear doors? Gone. Think of the efficiency!
rotis•4mo ago
The fire started on 26th September, and news about it reached HN only now. I think that says something about how disruptive this accident really was for daily life in South Korea.
aio2•4mo ago
Funny, because the same thing happened in Nepal a few weeks ago. Protestors/rioters burned some government buildings, along with the tech infrastructure within them, so now almost all electronic data is gone.
dottjt•4mo ago
Would this have been any different if these documents were stored non-electronically though? I understand that the whole point of electronic data is that it can be backed up, but if the alternative were simply an analog system then it would have fared no better.
seunosewa•4mo ago
It would have been better if storage was distributed.
Muromec•4mo ago
Paper records are usually distributed both by agency and by locality.
dikei•4mo ago
For paper documents, you'd make at least a few copies for storage at the source, and then every receiver will get his/her own notarized copies.

Electronically, everyone just receives a link to read the document.

rvba•4mo ago
Happened in Blade Runner too
senordevnyc•4mo ago
And Fight Club
serioussecurity•4mo ago
Anti authoritarian patriots?
perihelions•4mo ago
One source,

https://www.nytimes.com/2025/09/13/world/asia/nepal-unrest-a... ("Many of the nation’s public records were destroyed in the arson strikes, complicating efforts to provide basic health care")

hackernewds•4mo ago
Not sure where you got that info. Only physical documents were burned (intentionally, by the incumbents, you could argue); the digital backups were untouched.
727564797069706•4mo ago
Meanwhile, Estonia has a "data embassy" in Luxembourg: https://e-estonia.com/solutions/e-governance/data-embassy/

TL;DR: Estonia operates a Tier 4 (highest security) data center in Luxembourg with diplomatic immunity. It can actively run critical government services in real time, not just hold backups.

lostmsu•4mo ago
This comment is in some way more interesting than the topic of the article.
_joel•4mo ago
Totally, backup disasters are a regular occurrence (maybe not to this degree of negligence), but the Estonia DR is wild.
lucb1e•4mo ago
Definitely. Especially when considering that there were 95 other systems in this datacentre which do have backups, and

> The actual number of users is about 17% of all central government officials

Far from all, and they're not sure what's recoverable yet ("It's difficult to determine exactly what data has been lost.")

Which is not to say that it's not big news ("the damage to small business owners who have entered amounts to 12.6 billion Korean won." The 'National Happiness Card,' used for paying childcare fees, etc., is still 'non-functional.'), but it puts things a bit in perspective; it's not "all was lost," as the original submission basically stated.

Quotes from https://www.chosun.com/english/national-en/2025/10/02/FPWGFS... as linked by u/layer8 elsewhere in this thread

hkt•4mo ago
That is absolutely delightful. Estonia is just _good_ at this stuff. Admirable.
lukeqsee•4mo ago
This is because everything is in digital form. Essentially all government systems are digital-first and, for the citizen, often digital-only. If the data is lost, there may be no paper records from which to restore everything: land registry, business registry (operating agreements, ownership records), etc.

Without an out-of-country backup, a reversion to previous statuses means the country is lost (Estonia has been occupied a lot). With it, much of the government can continue to function as a government in exile until freedom and independence are restored.

chpatrick•4mo ago
"secured against cyberattacks or crisis situations with KSI Blockchain technology"

hmmmm

tamimio•4mo ago
> Estonia follows the “once-only” principle: citizens provide their data just once, and government agencies re-use it securely. The next step is proactive services—where the government initiates service delivery based on existing data, without waiting for a citizen’s request.

I wish the same concept existed in Canada as well. You absolutely have to resubmit all your information every time you make a request. On top of that, federal government agencies still mail each other the information, so what could be done in 1 day takes a whole month to process, assuming the postal service isn't on strike (spoiler: they are now).

I think Canada has one of the worst records for efficiency and useless bureaucracy among first-world countries.

__turbobrew__•4mo ago
I wanted to update some paperwork to add my wife as a beneficiary to some accounts. I go to the bank in person and they tell me “call this number, they can add the beneficiary”. I call the number and wait on hold for 30 minutes and then the agent tells me that they will send me an email to update the beneficiary. I get an email over 24 hours later with a PDF THAT I HAVE TO PRINT OUT AND SIGN and then scan and send back to the email. I do that, but then I get another email back saying that there is another form I have to print and sign.

This is the state of banking in Canada. God forbid they just put a text box on the banking web app where I can put in my beneficiary.

Not to mention our entire health care system still runs on fax!

It blows my mind that we have some of the smartest and best-educated people in the world, with some of the highest GDP per capita in the world, and we cannot figure out how to get rid of paper documents. You should be issued a federal digital ID at birth, attested through a chain of trust back to the federal government. Everything related to the government should be tied back to that ID.

koakuma-chan•4mo ago
I used my bank as a Sign-In Partner for IRCC, and I lost my IRCC account after my debit card expired and I got a new one.
layer8•4mo ago
Some more details in this article: https://www.chosun.com/english/national-en/2025/10/02/FPWGFS...
dang•4mo ago
Thanks! we've added that link to the toptext as well
lucb1e•4mo ago
> The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets.

This attempt at putting it in perspective makes me wonder what would put it in perspective. "100M sets of Harry Potter novels" would be one step in the right direction, but nobody can imagine 100M of anything either. Something like "a million movies" wouldn't work because movies are very different from text media in terms of how much information is in one, even if the bulk of the data here is likely media. It's an interesting problem, even if this article's attempt is so bad it's almost funny.

Good article otherwise though, indeed a lot more detail than the OP. It should probably replace the submission. Edit: dang was 1 minute faster than me :)

sixothree•4mo ago
"equivalent to 50 hard drives" ?
lucb1e•4mo ago
I don't think many people have 16TB of storage on the hard drives they're familiar with, but I take your point nonetheless! Simple solution. I was apparently too focused on the information factor and didn't think of just saying how many of some storage medium it is equivalent to, which makes a lot of sense indeed.
mort96•4mo ago
How about "equivalent to 858 1TB hard drives"?
jopsen•4mo ago
> The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups.

This is why I don't really want to run my own cloud :)

Actually testing the backups is boring.

That said, once the flames are out, they might actually be able to recover some of it.

whartung•4mo ago
Testing backups is boring. If you want exciting, test restores!
Imustaskforhelp•4mo ago
Hm, care to elaborate? I kinda liked this idea even though I know it shouldn't make much sense, but still, lol. Would this have any benefits over testing backups, other than the excitement?
procaryote•4mo ago
The joke is that if you don't test backups you'll end up seeing if they work once you're restoring after a disaster, which is exciting
fer•4mo ago
I'm stealing this!
MangoCoffee•4mo ago
what's the point of a storage system with no backup?
WJW•4mo ago
It works fine as long as it doesn't break, and it's cheaper to buy than an equivalently sized system that does have backups.
lucb1e•4mo ago
Isn't that self-evident? Do you have two microwaves from different batches, regularly tested, solely for the eventuality that one breaks? Systems work fine until some (unlikely) risk manifests...

Idk if this sounds like I'm against backups; I'm not, I'm just surprised by the question

MangoCoffee•4mo ago
It's hard to believe this happened. South Korea has tech giants like Samsung, and yet this is how the government runs? Is the US government any better?
userbinator•4mo ago
The US government still relies heavily on physical records.
nebula8804•4mo ago
Didn't Elon shut that down?

[0]: https://www.cnbc.com/2025/02/13/company-ripped-by-elon-musk-...

AshamedCaptain•4mo ago
Why is there a "still" in there?
ashirviskas•4mo ago
South Korean IT seemed to be stuck in 2007 until not too long ago; I'd be surprised if it has changed much in the last few years. Do the websites still require you to use Internet Explorer?
r_lee•4mo ago
Software and information technology in Korea just sucks.

Buttons are JPEGs/GIFs, everything runs on Java EE and on vulnerable old webservers, etc. A lot of government stuff supports only Internet Explorer even though it's long dead.

creakingstairs•4mo ago
Remember the Log4j vulnerability? A lot of Korean governmental sites weren't affected because their Java version was too old :)

Don't even get me started on ActiveX.

carrychains•4mo ago
The first thing that comes to mind when I think of the South Korean government is the storied tradition of physical confrontation in their parliament, along with more than a few viral videos of brawls and such over the years. It used to be better in the US, but with the intensity of discord in our government lately, I don't think anyone really knows anymore.
eagleislandsong•4mo ago
> The first thing that comes to mind when I think of the South Korean government is the storied tradition of physical confrontation in their parliament along with more than a few viral videos of brawls and such over the years

You're thinking of Taiwan, not South Korea.

creakingstairs•4mo ago
No, South Korea has the same thing. It doesn't happen yearly, but it has happened quite a bit. We lovingly call it a parliament siege raid.

https://m.blog.naver.com/gard7251/221339784832 (a random blog with gifs)

zaptheimpaler•4mo ago
If only our politicians were young and agile enough to get into brawls... Their speed is more sleeping on the job while democracy crumbles.
logicchains•4mo ago
Samsung's software is generally terrible; they're decent at hardware, not software.
3eb7988a1663•4mo ago
I was going to say: Samsung anything immediately makes me assume the software is awful. With a dose of zero privacy, a cloud-enabled door-knob or something.
greenavocado•4mo ago
The hardware is highly engineered to fail at a specific time window after the warranty is over.
moduspol•4mo ago
Our incompetence in the US is much more distributed. It wouldn't surprise me if the same kind of data isn't backed up, but at least it's dozens of separate federal agencies not-backing up their data in different physical places.
foofoo12•4mo ago
Well, Elon has a recent copy of everything at least.
jml78•4mo ago
Yes. The US government requires offsite backups.

They also require routine testing of disaster recovery plans.

I participated in so many different programs over the years with those tests.

Tests that would roll over to facilities across the country

aorloff•4mo ago
Theoretically, they still have the primary copies (on each individual person's "cloud-enabled" device).
cthalupa•4mo ago
> The Ministry of the Interior and Safety also issued guidelines to each ministry stating, “All work materials should not be stored on office PCs but should be stored on the G-Drive.”

They very well might have only been saving to this storage system. It was probably mapped as a drive or shared folder on the PC.

crazygringo•4mo ago
Do they? It's not clear if this was two-way sync or access on-demand.

Like, I use Google Drive for Desktop but it only downloads the files I access. If I don't touch a file for a few days it's removed from my local cache.

jn78•4mo ago
https://phrack.org/issues/72/7_md#article
msbhvn•4mo ago
Woah, read the timeline at the top of this. The fire happened the very day a government-ordered onsite inspection was supposed to start due to Chinese/NK hacking.
yieldcrv•4mo ago
So, someone figured out how to do backups
southernplaces7•4mo ago
They certainly will after this.
AnimalMuppet•4mo ago
Yeah, this whole thing smells.

Who has the incentive to do this, though? China/North Korea? Or someone in South Korea trying to cover up how bad they messed up? Does adding this additional mess on top mean they looked like they messed up less? (And for that to be true, how horrifically bad does the hack have to be?)

mattmaroon•4mo ago
It might be different "they"s. Putting on my tinfoil hat: whoever was going to be in hot water over the hack burns it down, and now the blame shifts from them to whoever manages the G-drive and didn't have a backup plan.

Not saying I believe this (or even know enough to have an opinion), but it’s always important to not anthropomorphize a large organization. The government isn’t one person (even in totalitarian societies) but an organization that contains large numbers of people who may all have their own motivations.

keepamovin•4mo ago
If there was shady behavior, I doubt it’s about a cyber hack. More likely probably the current administration covering their tracks after their purges.

Alternate hypothesis: a cloud storage provider doing the hard sell. Hahaha :)

mattmaroon•4mo ago
“It’d be a real shame if something happened to your data center…”
conductr•4mo ago
> whoever was going to be in hot water over the hack burns it down and now the blame shifts from them to whoever manages G-drive and don’t have a backup plan.

LG is an SK firm and the manufacturer of the hacked hardware, and also of the batteries that caught fire. Not sure it's a solid theory, just something I took note of while thinking the same.

mattmaroon•4mo ago
Interesting but same concept applies to organizations.
jftr•4mo ago
Phrack's timeline may read like it, but it wasn't an onsite inspection due to hacking; it was scheduled maintenance to replace the overdue UPS, hence the battery handling. Even the image they linked just says "scheduled maintenance."
dmix•4mo ago
Supply chain interceptions can happen for batteries and other electronics being used.
danudey•4mo ago
So right after the investigation was announced, they suddenly scheduled a UPS battery replacement which happened to start a fire big enough to destroy the entire data centre and all data or evidence?

Yeah, that's way less suspicious, thanks for clearing that up.

naruhodo•4mo ago
My mind initially went to a government cover-up, but then:

> 27th of September 2025, The fire is believed to have been caused while replacing Lithium-ion batteries. The batteries were manufactured by LG, the parent company of LG Uplus (the one that got hacked by the APT).

Could the battery firmware have been sabotaged by the hacker to start the fire?

PapaPalpatine•4mo ago
That's exactly where my mind went!
lazystar•4mo ago
this was a plot in a Mr. Robot episode, heh. Life imitating art?
asimovDev•4mo ago
was there a battery hacking episode? I can't remember the show anymore; might be due for a rewatch, it seems.
dijit•4mo ago
iirc that’s how they destroyed “steel mountain”.
voidUpdate•4mo ago
They hacked the firmware of the UPSs inside e-corp to destroy all paper records. The steel mountain hack was messing with the climate controls using a raspi to destroy tape archives
KaiserPro•4mo ago
It could have.

But

replacing a UPS is usually done under tight time pressure. The problem is, you can rarely de-energise UPS batteries before replacing them; you just need to be really careful when you do it.

Depending on the UPS, bus bars can be a mother fucker to get on, and if they touch while energised they tend to weld together.

With lead acid it's pretty bad (think molten metal and lots of acidic, toxic and explosive gas); with lithium, it's just fire. Lots of fire that is really, really hard to put out.

gtech1•4mo ago
Don't you have to put UPS's in bypass mode precisely for this reason while doing maintenance on them ?
KaiserPro•4mo ago
Yeah, but the problem is that the batteries are still full of juice.

Obviously for rack-based UPSs you'd "just" take out the UPS, or the battery drawer, and do the replacement somewhere safer, or better yet, swap out the entire thing.

For more centralised UPSs that gets more difficult. The shitty old large UPSs were a bunch of cells bolted to a bus bar, and then onto the switchgear/concentrator.

For lithium, I would hope it's proper electrical connectors, but you can never really tell.

positron26•4mo ago
UPS, check. Any kind of reasonable fire extinguisher, nah.

A Kakao datacenter fire took the de-facto national chat app offline not too many years ago. Imagine operating a service that was nearly ubiquitous in the state of California and not being able to survive one datacenter outage.

After reading the Phrack article, I don't know which to suspect: the typical IT disaster preparedness, or the operators turning off the fire suppression main and ordering everyone in the room to evacuate, giving a little UPS fire enough time to start going cabinet to cabinet.

jychang•4mo ago
If the theory "north korea hacked the UPS batteries to blow" is true, though, then it makes more sense why fire suppression wasn't able to kick in on time.
ruined•4mo ago
look at the timeline again. this is the second fire.
positron26•4mo ago
technically seems more accurate to say controlled burn
niffydroid•4mo ago
https://www.ispreview.co.uk/index.php/2025/09/openreach-give...

Recently in the UK a major communication company had issues with batteries

trhway•4mo ago
Such coincidences do happen. 20 years ago, the plane carrying all the top brass of the Russian Black Sea Fleet, as well as the Fleet's accounting documentation headed for inspection in Moscow, burst into flames and fell to the ground while trying to get airborne. Being loaded with fuel, it immediately became one large infernal fireball. By some miracle, none of the top brass suffered even a minor burn or injury, while all the accounting documentation burned completely.
southernplaces7•4mo ago
One hell of an act of God that... Believable though, given the consistent transparency and low corruption in the Russian government's administration.
IAmBroom•4mo ago
Golf clap, lasting several minutes.

Bravo, old boy.

madaxe_again•4mo ago
Quite a few of those top brass years later shot themselves in the head several times before jumping from a window.

Anyway, shoe production has never been better.

postsantum•4mo ago
"NK hackers" reminds me "my homework was eaten by a dog". It's always NK hackers that steal data/crypto and there is absolutely no possibility to do something with it or restore the data, because you know they transfer the info on a hard disk and they shoot it with an AD! Like that general!

How do we know it's NK? Because there are comments in north-korean language, duh! Why are you asking, are you russian bot or smt??

bboygravity•4mo ago
The good news is: there are still off-site backups.

The bad news is: they're in North Korea.

IAmBroom•4mo ago
"Your Holiness! I have terrible news! Jesus has returned!"

"But that's a blessed event? How could that be terrible?"

"He appeared in Salt Lake City."

j3th9n•4mo ago
Figures.
jddj•4mo ago
Silver lining: it's likely that technically there is a backup (section 1.3).

It's just in NK or china.

Yikes.

tibbon•4mo ago
I don't back up my phone. The NSA does it for me!
juancb•4mo ago
The recovery process and customer service around that are near impossible
azinman2•4mo ago
In the same respect, /dev/null can back up mine. Good luck getting the data back.
edm0nd•4mo ago
The only part of our government that listens.
63stack•4mo ago
This is the first time I see this site, who/what is phrack? A hacker group?
fiatpandas•4mo ago
It’s a zine. Been around since the 80’s. Hackers / security industry types read and publish to it.
godelski•4mo ago
For more context, the name derives from "phone hacking" or phreacking. You got your legends like Captain Crunch, and many of your big tech players were into this stuff when they were younger, such as Woz.

This was also often tied to a big counterculture movement. One interesting thing is that many of those people now define the culture. I guess not too unlike how many hippies changed when they grew up.

quesera•4mo ago
> the name derives from "phone hacking" or phreacking

Etymology quibble: There is no 'c' in phreaking. Phrack is just a portmanteau of "phreak" and "hack". :)

godelski•4mo ago
Haha thanks. I constantly make that mistake.

  > Phrack is just a portmanteau of "phreak" and "hack". :)
Well... I think that explanation also explains this common mistake :):
AnimalMuppet•4mo ago
https://en.wikipedia.org/wiki/Phrack
Imustaskforhelp•4mo ago
Not sure why people downvoted you, as I actually read the Wikipedia article and learnt a lot about Phrack, and how its name is sort of inspired by "phreaking, anarchy and cracking" - hence, I think, the name ph-ra-ck.
Cthulhu_•4mo ago
It looks delightful, but definitely for and by a specific subculture.
NKosmatos•4mo ago
Thanks for this, it gives a lot of extra info and content compared to the original article.
neilv•4mo ago
When you see a chronology like that, you don't keep trying to speak truth to power.

You delete your data, trash your gear, and hop on a bus, to start over in some other city, in a different line of work.

AnimalMuppet•4mo ago
s/city/country/
maldonad0•4mo ago
And with no technology! Perhaps become some kind of ascetic monk.
baobun•4mo ago
> 27th of September 2025, The fire is believed to have been caused while replacing Lithium-ion batteries. The batteries were manufactured by LG, the parent company of LG Uplus (the one that got hacked by the APT).

Compromised batteries or battery controllers?

lwhi•4mo ago
Witness A said, “It appears that the fire started when a spark flew during the process of replacing the uninterruptible power supply,” and added, “Firefighters are currently out there putting out the fire. I hope that this does not lead to any disruption to the national intelligence network, including the government’s 24 channel.”[1]

[1] https://mbiz.heraldcorp.com/article/10584693

rawgabbit•4mo ago
How large is this UPS that a fire can bring down all 96 servers?

This story is really unbelievable.

sleepybrett•4mo ago
Depends on how many batteries were in the facility; if one goes up, chances are the rest go too. Can halon systems not put out lithium fires?
RandomBacon•4mo ago
I'm not sure about South Korea, but in the U.S., halon started to be phased out in 1994 due to its ozone-depleting characteristics. I believe new facilities use CO2.

I'm guessing lithium-ion batteries were not a factor years ago when those decisions were made.

waste_monk•4mo ago
>Can halon systems not put out lithium fires?

As the other commenter said, Halon hasn't been a thing for a fair while, but inert gas fire suppression systems in general are still popular.

I would expect it wouldn't be sufficient for a lithium ion battery fire - you'd temporarily displace the oxygen, sure, but the conditions for fire would still exist - as soon as enough nitrogen (or whatever suppressant gas is in use) dissipates, it'd start back up again.

Also, as I understand it, thermal runaway is self-sustaining, since lithium-ion batteries have a limited capacity to provide their own oxygen (something to do with the cathode breaking down?), so it might continue burning even while the area is mostly flooded with inert gas.

I believe it would be similar to an EV car fire, that is, you'd have to flood the area with water and wait for it to cool down enough that thermal runaway stops. Maybe they can do better these days with encapsulating agents but I'd still expect the rack housing the UPS to be a write-off.

oasisbob•4mo ago
> I would expect it wouldn't be sufficient for a lithium ion battery fire - you'd temporarily displace the oxygen

(Edit: sorry, in hindsight it's obvious the comment I'm replying to was referring to inert gas systems, not halogenated systems.)

Halon and friends don't work through an oxygen displacement mechanism; their fire suppression effects are primarily due to how the halogen moieties interfere with the decomposition of other substances in the flame. IIRC, a key mechanism is the formation of hydrogen(!) from hydrogen radicals.

Apparently, if the calibration is correct, halon can be deployed in a space to suppress a fire without posing an asphyxiation risk.

A good review is here: https://www.nist.gov/system/files/documents/el/fire_research...

rini17•4mo ago
They were not tested enough for that. From a chemical POV, the fluorine in halon can even react exothermically with lithium, like teflon can with aluminium. But it all depends on circumstances; it needs high temperatures, and the lithium concentration in batteries is low.
rcxdude•4mo ago
Lithium-ion batteries provide their own oxidiser, so removing oxygen won't put them out (though it will probably help stop the fire from spreading). The only thing that kinda helps is removing the heat (with cold CO2 or water; the latter is not great for an electrical fire and the former is only good for pretty small fires), but that's usually only a temporary fix. Ultimately a lithium battery fire has got to burn itself out.
davkan•4mo ago
I'm no expert, but traditional lead-acid UPS batteries are typically at the bottom of the rack due to weight and concern about leakage. Wouldn't surprise me if li-ion UPSs go at the bottom as well. In that case, if uncontrolled, it seems pretty easy to torch an entire rack.

96 servers isn't that many, probably less than 10 racks, and given the state of the backups it would track that they didn't spring for halon.

ivape•4mo ago
This sounds like a real whodunit.
FergusArgyll•4mo ago
Well, I think we know "who"dunnit; it's more of a howdunnit & are-they-still-in-dunnit
Imustaskforhelp•4mo ago
Ohh, side note, but this was the journalist group which was blocked by Proton.

The timing as well is very suspicious, and I think there can be a lot of discussion about this.

Right now I am wondering about the name most, tbh, which might seem silly: "APT Down - The North Korean Files".

It seems that APT means, in this case, advanced persistent threat, but I am not sure what they mean by "APT Down" - the fact that it got taken down by their journalism, or-? I am sorry if this may seem naive, and on a serious note this raises so many questions...

taneq•4mo ago
For a moment there I was wondering if “apt down” was a typo and you meant “ifdown”. ;)
exogenousdata•4mo ago
“APT Down” is likely a reference to a popular Korean drinking game.

https://www.thetakeout.com/1789352/korea-apt-drinking-game-r...

Shank•4mo ago
Though this is far from the most important point of this article: why do even the article's authors defend Proton, after having their accounts suspended, and after seemingly having a Korean intelligence official warn them that they weren't secure? Even if they're perfectly secure, they clearly do not have the moral compass people believe they have.
Levitating•4mo ago
What other service would you use?
1oooqooq•4mo ago
not use email in this day and age?
Levitating•4mo ago
Okay, how would you approach companies for responsible disclosure?
1oooqooq•4mo ago
Treat email as a push notification: "here's a link"
FergusArgyll•4mo ago
> KIM is heavily working on ToyBox for Android.

2 HN front page articles in 1!

georgethedrab•4mo ago
thanks for the info, canceling proton rn
1oooqooq•4mo ago
Proton is still an alternative to Gmail. You replace the NSA and ad networks with the NSA only. It's a win.
8cvor6j844qw_d6•4mo ago
Currently, still on Proton for its aliasing service but keeping my eye out for a suitable replacement candidate.

Thankfully I made the right choice to stay on Bitwarden instead of moving to Proton Pass.

nicman23•4mo ago
holy shit lol. this is naked gun level incompetence
dang•4mo ago
[stub for offtopicness]
mouse_•4mo ago
We will learn nothing
pr337h4m•4mo ago
Now imagine they had a CBDC.
glitchc•4mo ago
I thought most liberal governments gave up on those.
blueflow•4mo ago
[flagged]
dvh•4mo ago
Technically the data is still in the cloud
pestaa•4mo ago
I've been putting off a cloud to cloud migration, but apparently it can be done in hours?
zigzag312•4mo ago
You can use accelerants to speed up migration
VeninVidiaVicii•4mo ago
The egress cost is gonna be a doozie though!
datadrivenangel•4mo ago
one of many fires to fight in such a fast scenario
zigzag312•4mo ago
The cloud has materialized
anonu•4mo ago
Lossy upload though
_zoltan_•4mo ago
Lossy download, no?
layer8•4mo ago
No information is lost: https://en.wikipedia.org/wiki/No-hiding_theorem#:~:text=info...
pjc50•4mo ago
Cloud of smoke, amirite.
higginsniggins•4mo ago
Unfortunately, the algorithm to unhash it is written in smoke signals
gnfargbl•4mo ago
https://mastodon.social/@nixCraft/113524310004145896
cs702•4mo ago
Brilliant.

This deserves its own HN submission. I submitted it but it was flagged due to the title.

Thank you for sharing it on HN.

kyrra•4mo ago
Copy/paste:

7 things all kids need to hear

1 I love you

2 I'm proud of you

3 I'm sorry

4 I forgive you

5 I'm listening

6 RAID is not backup. Make offsite backups. Verify backup. Find out restore time. Otherwise, you got what we call Schrödinger backup

7 You've got what it takes

zer00eyz•4mo ago
This is the reason the 3-2-1 rule for backups exists: three copies, on two different media, with at least one off-site.
dardeaup•4mo ago
They might be singing this song now. (To the tune of 'Yesterday' from the Beatles).

    Yesterday,
    All those backups seemed a waste of pay.
    Now my database has gone away.
    Oh I believe in yesterday.

    Suddenly,
    There’s not half the files there used to be,
    And there’s a deadline
    hanging over me.
    The system crashed so suddenly.

    I pushed something wrong
    What it was I could not say.
    Now my data’s gone
    and I long for yesterday-ay-ay-ay.

    Yesterday,
    The need for back-ups seemed so far away.
    Thought all my data was here to stay,
    Now I believe in yesterday.
_9ptr•4mo ago
For the German enjoyers among us I recommend also this old song: https://www.youtube.com/watch?v=jN5mICXIG9M
Zacharias030•4mo ago
Thanks! mmd.
cramcgrab•4mo ago
Well that works out doesn’t it? Saves them from discovery.
rolph•4mo ago
repeat after me:

multiple copies; multiple locations; multiple formats.

miohtama•4mo ago
I thought clouds could not burn (:
wartywhoa23•4mo ago
They are clouds of smoke to begin with. The smoke from the joints of those who believed that storing their data somewhere out of their control was a good idea!
johnnienaked•4mo ago
Good example of a Technology trap
BurningFrog•4mo ago
"The day the cloud went up in smoke"
abujazar•4mo ago
LOL
Havoc•4mo ago
>the G-Drive’s structure did not allow for external backups.

ah, the so-called Schrödinger's drive: it's there unless you try to copy it

ahmgeek•4mo ago
nice
nntwozz•4mo ago
The Egyptians send their condolences.
gardnr•4mo ago
Has there been a more recent event, or are you referring to Alexandria?
Zacharias030•4mo ago
I think Alexandria.
em-bee•4mo ago
https://en.wikipedia.org/wiki/Library_of_Alexandria#Burning_...

the destruction of the library of alexandria is under dispute.

Zacharias030•4mo ago
touché
shadowgovt•4mo ago
Yikes. That is a nightmare scenario.
thepill•4mo ago
Watching Mr. Robot and seeing the burned batteries at the same time...
dagaci•4mo ago
No problem — I'm sure their Supremely nice Leader up north kept a backup. He's thoughtful like that...
presentation•4mo ago
Should have given him back his stapler
Titan2189•4mo ago
I don't get it. Can you please explain the reference?
elcapitan•4mo ago
That's an "Office Space" reference, in which a grumpy employee burns down the IT company building.
Terr_•4mo ago
Perhaps extra-relevant to a story about data-loss, Milton was an employee who fell through the cracks in a broken corporate bureaucracy.

He was supposedly laid off years ago, but nobody actually stopped his paycheck, so he kept coming in to work assuming he was still employed, getting shuffled into increasingly abusive working environments by callously indifferent managers who assumed he was somebody else's problem.

latexr•4mo ago
It’s a reference to the movie Office Space and the Milton character.

https://en.wikipedia.org/wiki/Office_Space

mekoka•4mo ago
Or a piece of cake.
zb3•4mo ago
Well, now they'll have to negotiate with North Korea to get these backups..
pengaru•4mo ago
I seem to have misplaced my tiny violin
mirekrusin•4mo ago
They should ask if North has a backup.
GPerson•4mo ago
Hope this happens to Altman’s data centers.
smlacy•4mo ago
Someone found the literal HCF instruction.
r011erba11•4mo ago
Too bad this can't happen everywhere.
ivape•4mo ago
I mean ... was making backups on the backlog at least? Can they at least point to the work item that was going to get done soonish?
redditor98654•4mo ago
Maybe a "fast follow"? Right after launch of the "MVP"?
odie5533•4mo ago
It got pushed a couple sprints and we've got it on the plan for next quarter as long as no new features come in before then.
FinnKuhn•4mo ago
If it wasn't it most certainly is now
vntok•4mo ago
Why, there's nothing left to backup?
FinnKuhn•4mo ago
I suspect that they'll rebuild the system. Arguing against backups after that will be next to impossible.
system2•4mo ago
I wonder how many IT professionals were begging some incompetent upper management official to do this the right way, but were ignored daily. You'd think there would be concrete policies to prevent these things...
zulban•4mo ago
If I worked there I'd have had a hard time believing there were really no backups. Governments can be very nebulous.
kristianc•4mo ago
The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...

"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."

This is absolutely wild.

zwnow•4mo ago
They rightfully did not trust these companies. Sure, what happened is a disaster for them, but you can't simply trust Amazon & Microsoft.
oceansky•4mo ago
For sure the only error here is zero redundancy.
kingnothing•4mo ago
Why not? You can easily encrypt your data before sending it for storage on S3, for example.
zhouzhao•4mo ago
You can encrypt them at rest, but data that lies encrypted and is never touched is useless data. You need to be able to decrypt it as well. Also, there are plenty of incompetent devops around, and writing a decryption toolchain can be difficult.
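
For scale, the encrypt-before-upload flow (and the matching decrypt path) is a handful of lines. A minimal sketch assuming the boto3 and cryptography packages; the bucket, key, and file names are made up for illustration, and real key management (KMS, escrow, rotation) is the actual hard part, not shown here:

    import boto3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this secret, and back it up too
    fernet = Fernet(key)
    s3 = boto3.client("s3")

    plaintext = open("records.tar", "rb").read()

    # Encrypt locally, then upload: the provider only ever sees ciphertext.
    s3.put_object(Bucket="gov-offsite-backup", Key="records.tar.enc",
                  Body=fernet.encrypt(plaintext))

    # The "decryption toolchain" is symmetric and just as short.
    obj = s3.get_object(Bucket="gov-offsite-backup", Key="records.tar.enc")
    assert fernet.decrypt(obj["Body"].read()) == plaintext
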
kspacewalk2•4mo ago
Am I missing something? If you ever need to use this data, obviously you transfer it back to your premises and then decrypt it. Whether it's stored at Amazon or North Korean Government Cloud makes no difference whatsoever if you encrypt before and decrypt after transfer.
DarkmSparks•4mo ago
Encryption only protects data for an unknown period of time, not indefinitely.
mikehotel•4mo ago
If your threat model includes the TLA types, then backup to a physical server you control in a location geographically isolated from your main location. Or to a local set of drives that you physically rotate to remote locations.
oceansky•4mo ago
They can take the data hostage; the foreign nation would have no recourse.
Imustaskforhelp•4mo ago
Have it in multiple countries with multiple providers if money isn't a concern.

And are we forgetting that they could run a multi-cloud backup setup within their own country as well, or incentivize companies to build datacenters there in partnership with the government, with the same multi-cloud setup I mentioned earlier?

icedchai•4mo ago
Why write one when there are tools like “restic”?
mikehotel•4mo ago
Decryption is not usually an issue if you encrypt locally.

Tools like Kopia, Borg and Restic handle this and also include deduplication and other advanced features.

There's really no excuse for large orgs, or even for small businesses and the somewhat tech-literate public.
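As a rough sketch of how little ceremony this takes, here is restic driven from Python; the repository URL and password handling are illustrative, and restic encrypts client-side by default:

    # Init once, then back up and verify; restic handles encryption + dedup.
    import os
    import subprocess

    env = {**os.environ, "RESTIC_PASSWORD": "use-a-real-secret-store"}
    repo = "s3:s3.amazonaws.com/example-offsite-backups"  # hypothetical bucket

    subprocess.run(["restic", "-r", repo, "init"], env=env, check=True)
    subprocess.run(["restic", "-r", repo, "backup", "/srv/data"], env=env, check=True)
    subprocess.run(["restic", "-r", repo, "check"], env=env, check=True)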

AshamedCaptain•4mo ago
Is encryption, in almost any form, really reliable protection for a country's government's entire data? I mean, this is _the_ ultimate playground for "state-level actors" -- if someday there's a hole and it turns out it takes only 20 years to decrypt the data with a country-sized supercomputer, you can bet _this_ is what multiple foreign countries will try to decrypt first.
lucb1e•4mo ago
You're assuming that this needs to protect...

> ... a countries' government entire data?

But the bulk of the data is "boring": important to individuals, but not state security ("sorry Jiyeong, the computer doesn't know if you are a government employee. Apologies if you have rent to make this month!")

There likely exists data where the risk calculation ends up differently, so that you wouldn't store it in this system. For example, for nuke launch codes, they might rather lose them than loose them. Better to risk having to reset and re-arm them than to have them hijacked.

> Is encryption, [in?] any form, really reliable protection

There's always residual risk. E.g.: can you guarantee that every set of guards that you have watching national datacenters is immune from being bribed?

Copying data around on your own territory thus also carries risks, but you cannot get around it if you want backups for (parts of) the data

People in this thread are discussing specific cryptographic primitives that they think are trustworthy, which I think goes a bit deeper than makes sense here. Readily evident is that there are ciphers trusted by different governments around the world for their communication and storage, and that you can layer them such that all need to be broken before arriving at the plain, original data. There is also evidence in the Snowden archives that (iirc) e.g. PGP could not be broken by the NSA at the time. Several ciphers held up for the last 25+ years and are not expected to be broken by quantum computers either. All of these sources can be drawn upon to arrive at a solid choice for an encryption scheme

makeitdouble•4mo ago
A foreign gov getting all your security researchers and staff's personal info with their family and tax and medical records doesn't sound great.

That's just off the top of my head. Exploiting such a trove of data doesn't sound complicated.

lucb1e•4mo ago
Yeah, that ignores about two thirds of my point, including that it would never get to the "exploiting such a trove of data" stage with a higher probability than storing it within one's own territory.
makeitdouble•4mo ago
I'm in agreement with your second point; moving data around within the country isn't trivial either and requires a pretty strong system. I just don't have much to say on that side, so I didn't comment on it.
kazinator•4mo ago
You and I can encrypt our data before saving it into the cloud, because we have nothing of value or interest to someone with the resources of a state.

Sometimes sensitive data at the government level has a pretty long shelf life; you may want it to remain secret for 30, 50, 70 years.

waterTanuki•4mo ago
I don't see how this is any different from countries putting significant portions of their gold & currency reserves in the NY Federal Reserve Bank. If for some reason the U.S. just decided to declare "Your monies are all mine now", the effects would be equally devastating, if not more so, than a data breach.
don_esteban•4mo ago
Exactly that happened to Russia, Iran, and Venezuela.
kazinator•4mo ago
Not North Korea though; they just have hundreds of thousands of dollars of unpaid parking tickets invested in the USA, which is a negative.

https://www.nbcnewyork.com/news/local/north-korea-parking-ti... [2017]

tsimionescu•4mo ago
The difference is that there are sometimes options to recover the money, and at least other countries will see and know that this happened, and may take some action.

A data breach, however, can be completely secret, both from you and from others. Another country (not even necessarily the one that is physically hosting your data) may have access to your data, and neither you nor anyone else would necessarily know.

Den_VR•4mo ago
On the Microsoft side, CVE-2025-55241 is still pretty recent.

https://news.ycombinator.com/item?id=45282497

politelemon•4mo ago
S3 features have saved our bacon a number of times. Perhaps your experience and usage are different. They are worth trusting with business-critical data as long as you're following their guidance. GCP, though, has not proven itself; their data-loss news is still fresh in my mind.
hosh•4mo ago
Were you talking about this incident? https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...

I am currently evaluating between GCP and AWS right now.

Imustaskforhelp•4mo ago
I read the article, and it seems that happened because their account got deleted. Here is something from the article you linked:

Google Cloud is supposed to have safeguards that don't allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution).

If you are working with really important data, please follow the 3-2-1 rule (three copies, on two different media, one off-site) even with cloud providers, if you genuinely want strong guarantees; how far you go depends on how important the data is relative to the price.

I have thought about using cheap providers like Backblaze and Wasabi for the 3-2-1 backups. This incident was definitely interesting to read into and I will read more about it; I remember it from Kevin Fang's video, but this article is seriously good and I will read it later. Bookmarked.

fabian2k•4mo ago
Using the cloud would have been the easiest way to achieve the necessary redundancy, but far from the only one. This was just a flawed concept from the start, with no real redundancy.
DarkmSparks•4mo ago
But not security. And for governmental data, security is a far more important consideration.

Not losing data while keeping untrusted parties out of it is a hard problem, one that "cloud" (a.k.a. "stored somewhere that is accessible by agents of a foreign nation") does not solve.

freehorse•4mo ago
As OP says, cloud is not the only solution, just the easiest. They should probably have had a second backup in a different building. It would probably require a bit more involvement, but it's definitely doable.
DrewADesign•4mo ago
It's the government of South Korea, which has a nearly 2 trillion dollar GDP. Surely they could have built a few more data centers connected with their own fiber if they were that paranoid about it.
miken123•4mo ago
Because these companies never lose data, like during some lightning strikes, oh wait: https://www.bbc.com/news/technology-33989384

As a government you should not be putting your stuff in an environment under control of some other nation, period. That is a completely different issue and does not really relate to making backups.

firesteelrain•4mo ago
For this reason, Microsoft has Azure US Government, Azure China, etc.
whatevaa•4mo ago
Yeah, I heard that consumer clouds are only locally redundant and there aren't even backups. So big DC damage could result in data loss.
lima•4mo ago
What do you mean by "consumer clouds"?
whatevaa•4mo ago
I refer to stuff like onedrive/gdrive/dropbox.
lima•3mo ago
It's certainly not the case for Google Drive, which is geo-replicated, and I would be very surprised if it's true for any other major cloud.
alwa•4mo ago
I mean… at the risk of misinterpreting sarcasm—

Except for the backup strategy said consumers apply to their data themselves, right?

If I use a service called “it is stored in a datacenter in Virginia” then I will not be surprised when the meteor that hits Virginia destroys my data. For that reason I might also store copies of important things using the “it is stored in a datacenter in Oregon” service or something.

whatevaa•4mo ago
You might expect backups in case of fire, though. Even if data is not fully up to date.
Johnny555•4mo ago
By default, Amazon S3 stores data across at least three separate datacenters in the same region that are physically separate from each other:

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive redundantly store objects on multiple devices across a minimum of three Availability Zones in an AWS Region. An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Availability Zones are physically separated by a meaningful distance, many kilometers, from any other Availability Zone, although all are within 100 km (60 miles) of each other.

You can save a little money by giving up that redundancy and having your data in a single AZ:

The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single Availability Zone

For further redundancy you can set up replication to another region, but if I needed that level of redundancy, I'd probably store another copy of the data with a different cloud provider so an AWS global failure (or more likely, a billing issue) doesn't leave my data trapped with one vendor.

I believe Google and Azure offer similar levels of redundancy in their cloud storage.
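To make the trade-off concrete, here's a minimal boto3 sketch (bucket and key names hypothetical) choosing the cheaper single-AZ class instead of the multi-AZ redundancy described in the quote above:

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="example-bucket",           # hypothetical
        Key="archive/backup.tar.enc",
        Body=b"...",
        StorageClass="ONEZONE_IA",         # single AZ: cheaper, less redundant
    )
    # Omit StorageClass entirely to get the default multi-AZ S3 Standard class.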

ncruces•4mo ago
“The BBC understands that customers, through various backup technologies, external, were able to recover all lost data.”

You back up stuff. To other regions.

littlestymaar•4mo ago
But the Korean government didn't back up; that's the problem in the first place here…
ncruces•4mo ago
Sure. Using a cloud can make that more convenient. But obviously not if you then keep all your data in the same region, or even "availability zone" (which seems to be the case for all the "lost to lightning strikes" data here).
lima•4mo ago
...on a single-zone persistent disk: https://status.cloud.google.com/incident/compute/15056#57195...

> GCE instances and Persistent Disks within a zone exist in a single Google datacenter and are therefore unavoidably vulnerable to datacenter-scale disasters.

Of course, it's perfectly possible to have proper distributed storage without using a cloud provider. It happens to be hard to implement correctly, so apparently, the SK government team in question just decided... not to?

kspacewalk2•4mo ago
>As a government you should not be putting your stuff in an environment under control of some other nation, period.

Why? If you encrypt it yourself before transfer, the only possible control some_other_nation will have over you or your data is availability.

littlestymaar•4mo ago
First of all, you cannot do much if you keep all the data encrypted on the cloud (basically just backing things up, and hoping you don't have to fetch it, given the egress cost). Also, availability is exactly the kind of issue that a fire causes…
creddit•4mo ago
Yeah backups would’ve been totally useless in this case. All South Korea could’ve done is restore their data from the backups and avoid data loss.
littlestymaar•4mo ago
What part of the incident did you miss? The problem here was that they didn't back up in the first place.

You don't need the cloud for backups, and there's no reason to believe that they would have backed up their data any more diligently while using the cloud than they did with their self-hosting…

shakna•4mo ago
You're forgetting that you're talking about nation states here. Breaking encryption is in fact the role of the people you are giving access to.

Sovereign delivery makes sense for _nations_.

bombcar•4mo ago
You can use and abuse encrypted one time pads and multiple countries to guarantee it’s not retrievable.
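For reference, a one-time pad really is just XOR against a truly random pad as long as the data; a toy sketch (illustrative only):

    import secrets

    def otp_xor(data: bytes, pad: bytes) -> bytes:
        # XOR is its own inverse: the same function encrypts and decrypts.
        assert len(pad) == len(data)
        return bytes(a ^ b for a, b in zip(data, pad))

    message = b"state secret"
    pad = secrets.token_bytes(len(message))  # must never be reused
    ciphertext = otp_xor(message, pad)
    assert otp_xor(ciphertext, pad) == message

The catch, as noted below, is that you now have to generate, ship, and protect a pad exactly as large as the data itself.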
makeitdouble•4mo ago
You're assuming a level of competency that's hard to warrant at this point.
Imustaskforhelp•4mo ago
If your threat model is so extreme that it includes encryption being broken, then you also need a matching level of competency in the process.

They have a $2 trillion economy. Those dollars don't automatically make them more competent, but they certainly make it possible to hire or train that competency; at that scale, competency shouldn't be the thing they have to worry about.

Maybe this incident at least teaches us something. I am genuinely curious how the parent comment imagines sharing a one-time pad in practice: most others here refer to using clouds like AWS, and at the scale of petabytes and more I am not sure how you would distribute and protect pads of that size. I would love for the GP to describe a practical way of doing so that is actually safer than conventional encryption methods.

makeitdouble•4mo ago
I think it doesn't need to be the encryption breaking per se.

It could be a gov laptop with the encryption keys left at a bar. Or the wrong keys saved on the system, so the backups can't actually be decrypted. Or the keys being reused at large scale and leaked/guessed from a lower-security area, etc.

Relying on encryption requires operational knowledge and discipline. At some point, a base level of competency is required anyway; I'm just not sure encryption would have saved them as much as we'd wish it would.

To your point, I'd assume high-profile incidents like this one will put more pressure on making radical changes, and in particular on treating digital data as a critical asset that you can't hand over to the most corrupt entity willy-nilly just for the kickback.

South Korea doesn't lack competent people, but hiring them and letting them at the helm sounds like a tough task.

NegativeK•4mo ago
Using a OTP in your backup strategy adds way more complexity, failure modes, and costs with literally no improvement in your situation.
firesteelrain•4mo ago
I know there is legit hate for VMWare/Broadcom but there is a legit case to be made for VCF with an equivalent DR setup where you have replication enabled by Superna and Dell PowerProtect Data Domain protecting both local and remote with Thales Luna K160 KMIP for the data at rest encryption for the vSAN.

To add, use F710s, H710s and then add ObjectScale storage for your Kubernetes workloads.

This setup repatriates your data and gives you a cloud-like experience. Pair it with EKS-A and you have a really good on-premises private cloud that is resilient.

threeducks•4mo ago
This reads very similar to the Turbo Encabulator video.
eCa•4mo ago
I agree completely that it's absolutely wild to run such a system without backups. But at this point no government should keep critical data on foreign cloud storage.
stogot•4mo ago
Why not? If the region is in-country, encrypted, and with proven security attestations validated by third parties, a backup to cloud storage would be incredibly wise. Otherwise we might end up reading an article about a fire burning down a single data center.
g-b-r•4mo ago
And which organization has every file, from each of their applications using the cloud, encrypted *before* it is sent to the cloud?
exe34•4mo ago
They're talking about backups. You can absolutely send an updated copy every night.
g-b-r•4mo ago
True, the user I was replying to only mentioned backups.

For those, there's surely no problem.

crazygringo•4mo ago
Exactly.

Like, don't store it in the cloud of an enemy country of course.

But if it's encrypted and you're keeping a live backup in a second country with a second company, ideally with a different geopolitical alignment, I don't see the problem.

OvbiousError•4mo ago
Enemy country in the current geopolitical climate is an interesting take. Doesn't sound like a great idea to me tbh.
deaddodo•4mo ago
There are a lot of gray relations out there, but there's almost no way you could morph the current US/SK relations into one of hostility, beyond a negligible minority of citizens in either country being super vocal about some perceived slights.
9dev•4mo ago
Trump will find a way, just as he did with Canada, for example (I mean, Canada of all places). Things are way more in flux than they used to be. There's no stability anymore.
shantara•4mo ago
A year ago, I would have easily claimed the same thing about Denmark.
throwaway2037•4mo ago
I don't follow. Can you share more context?
marcosdumay•4mo ago
The US is threatening to invade Greenland, which would mean active war with Denmark.
throwaway2037•4mo ago
Great point! I forgot that Greenland is not (yet) an independent nation. It is still a part of Denmark.
smcin•4mo ago
The current US admin's threats to annex Greenland, an autonomous territory of Denmark.
gitremote•4mo ago
You think when ICE arrested over 300 South Korean citizens who were setting up a Georgia Hyundai plant and subjected them to alleged human rights abuses, it was only a perceived slight?

https://www.huffpost.com/entry/south-korea-human-rights-inve...

How Trump’s ICE Raid Triggered Nationwide Outrage in South Korea

https://www.newsweek.com/trump-ice-raid-hyundai-outrage-sout...

'The raid "will do lasting damage to America's credibility," John Delury, a senior fellow at the Asia Society think tank, told Bloomberg. "How can a government that treats Koreans this way be relied upon as an 'ironclad' ally in a crisis?"'

deaddodo•4mo ago
Yes.
kergonath•4mo ago
One could have said the exact same thing about US-EU relations just a couple of years ago. And yet, here we are.
t-3•4mo ago
From the perspective of securing your data, what's the practical difference between a second country and an enemy country? None. Even if it's encrypted data, all encryption can be broken, and so we must assume it will be broken. Sensitive data shouldn't touch outside systems, period, no matter what encryption.
Avamander•4mo ago
Any even remotely proper symmetric encryption scheme "can be broken" but only if you have a theoretical adversary with nearly infinite power and time, which is in practice absolutely utterly impossible.

I'm sure cryptographers would love to know what makes it possible for you to assume that, say, AES-192 or AES-256 can be broken in practice for you to include it in your risk assessment.

9dev•4mo ago
You’re assuming we don’t get better at building faster computers and decryption techniques. If an adversary gets hold of your encrypted data now, they can just shelve it until cracking eventually becomes possible in a few decades. And as we’re talking about literal state secrets here, they may very well still be valuable by then.
stavros•4mo ago
Barring any theoretical breakthroughs, AES can't be broken any time soon even if you turned every atom in the universe into a computer and had them all cracking all the time. There was a paper that does the math.
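One illustrative back-of-the-envelope version of that math (the rates are assumptions, not figures from the paper mentioned):

    keyspace = 2 ** 256          # ~1.16e77 possible AES-256 keys
    rate = 10**18 * 10**18       # 1e18 devices, each testing 1e18 keys/second
    seconds = keyspace / rate    # ~1.16e41 seconds of brute force
    years = seconds / 3.15e7     # ~3.7e33 years
    print(f"{years:.1e} years")  # vs. ~1.4e10 years since the Big Bang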
Avamander•4mo ago
You make an incorrect assumption about my assumptions. Faster computers or decryption techniques will never fundamentally "break" symmetric encryption. There's no discrete logarithm or factorization problem to speed up. Someone might find ways to make for example AES key recovery somewhat faster, but the margin of safety in those cases is still incredibly vast. In the end there's such an unfathomably vast key space to search through.
immibis•4mo ago
You're also assuming nobody finds a fundamental flaw in AES that allows data to be decrypted without knowing the key and much faster than brute force. It's pretty likely there isn't one, but a tiny probability multiplied by a massive impact can still land on the side of "don't do it".
Avamander•4mo ago
I'm not. It's just that the math behind AES is very fundamental and incredibly solid compared to a lot of other (asymmetric) cryptographic schemes in use today. Calling the chances of it tiny instead of nearly nonexistent sabotages almost all risk assessments, especially if it then overshadows other parts of that assessment (like data loss). Even if someone found "new math" and it takes, very optimistically, 60 years, of what value is that data then? It's not a useful risk assessment if you assess it over infinite time.

But you could also go with something like OTP and then it's actually fundamentally unbreakable. If the data truly is that important, surely double the storage cost would also be worth it.

XorNot•4mo ago
The risk that the key leaks through an implementation bug or a human intelligence source.

Exfiltrating terabytes of data is difficult, exfiltrating 32 bytes is much less so.

Avamander•4mo ago
That's very far from the encryption itself being broken though. If that were the claim, I would have had no complaints.
crazygringo•4mo ago
> From the perspective of securing your data, what's the practical difference between a second country and an enemy country? None.

Huh? An enemy country will shut off your access. Friendly countries don't.

> Even if it's encrypted data, all encryption can be broken, and so we must assume it will be broken.

This is a very, very hot take.

VirusNewbie•4mo ago
A statement like "all encryption can be broken" is about as useful as "all systems can be hacked" in which case, not putting data in the cloud isn't really a useful argument.
manquer•4mo ago
The problem is money.

You are seeing the local storage decision through the lens of security; that is not the real reason for this type of decision.

While it may have been sold that way, the reality is more likely that the local DC companies just lobbied for it to be kept local and cut as many corners as they needed to. Both the fire and the architecture show they cut deeply.

Now why would a local company voluntarily cut down its share of the pie by suggesting backups be stored in a foreign country? They are going to suggest keeping it in-country, or worse (as was done here) literally in the same facility, and save/make even more!

The civil service would also prefer everything local, either for nationalistic/economic reasons or, if corrupt, for the kickbacks each step of the way: first for the contract, next for the building permits, utilities, and so on.

vkou•4mo ago
A country can become an adversary faster than a government can migrate away from it.
crazygringo•4mo ago
Hence a backup country. I already covered that.

But while countries go from unfriendly to attacking you overnight, they don't generally go from friendly to attacking you overnight.

vkou•4mo ago
Overnight, Canada went from being an ally of the US to being threatened by annexation (and target #1 of an economic war).

If the US wants its state-puppet corporations to be used for integral infrastructure by foreign governments, it's going to need to provide some better legal assurances than 'trust me bro'.

(Some laws on the books, and a congress and a SCOTUS that has demonstrated a willingness to enforce those laws against a rogue executive would be a good start.)

shakna•4mo ago
Microsoft has already testified that the American government maintains access to their data centres, in all regions. It likely applies to all American cloud companies.

America is not a stable ally, and has a history of spying on friends.

So unless the whole of your backup is encrypted offline, and you trust the NSA to never break the encryption you chose, it's a national security risk.

bink•4mo ago
Not only does the NSA break encryption but they actually sabotage algorithms to make them easier to break when used.
edoceo•4mo ago
Can the NSA break the Ed25519 stuff? Like the crypto_box from libsodium?
immibis•4mo ago
ed25519 (and curve25519) are generally understood not to be backdoored by the NSA, or weak in any known sense.

The lack of a backdoor can be proven by choosing parameters according to straightforward reasons that do not allow the possibility for the chooser to insert a backdoor. The curve25519 parameters have good reasons why they are chosen. By contrast, Dual_EC_DRBG contains two random-looking numbers, which the NSA pinky-swears were completely random, but actually they generated them using a private key that only the NSA knows. Since the NSA got to choose any numbers to fit there, they could do that. When something is, like, "the greatest prime number less than 2^255" you can't just insert the public key of your private key into that slot because the chance the NSA can generate a private key whose public key just happens to match the greatest prime number less than 2^255 is zero. These are called "nothing up my sleeve numbers".

This doesn't prove the algorithm isn't just plain old weak, but nobody's been able to break it, either. Or find any reason why it would be breakable. Elliptic curves being unbreakable rests on the discrete logarithm of a random-looking permutation being impossible to efficiently solve, in a similar way to how RSA being unbreakable relies on nobody being able to efficiently factorize very big numbers. The best known algorithms for solving discrete logarithm require O(sqrt(n)) time, so you get half the bits of security as the length of the numbers involved; a 256-bit curve offers 128 bits of security, which is generally considered sufficient.

(Unlike RSA, you can't just arbitrarily increase the bit length but have to choose a completely new curve for each bit length, unfortunately. ed25519 will always be 255 bits, and if a different length is needed, it'll be similar but called something else. On the other hand, that makes it very easy to standardize.)
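Since edoceo asked about libsodium specifically, here is what Ed25519 signing looks like with PyNaCl (the libsodium bindings); a minimal sketch, not a security recommendation:

    # pip install pynacl
    from nacl.signing import SigningKey

    sk = SigningKey.generate()    # 32-byte private signing key
    vk = sk.verify_key            # corresponding public key
    signed = sk.sign(b"message")  # signature + message
    vk.verify(signed)             # raises BadSignatureError if tampered with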

jacquesm•4mo ago
> but nobody's been able to break it, either.

Absence of evidence is not evidence of absence. It could well be that someone has been able to break it but that they or that organization did not publish.

edoceo•4mo ago
How could you not!? Think of the bragging rights. Or, perhaps, the havoc. That people could sit on this secret for long periods of time seems... difficult to believe. If you know it's broken and you've discovered it, surely someone else could too. And they've also kept the secret?

I agree on the evidence/absence conjecture. However, a secret with that much impact feels impossible to keep.

Time will, of course, tell; it wouldn't be the first occasion where that has embarrassed me.

jacquesm•4mo ago
There are a large number of mathematicians gainfully employed in breaking such things without talking about it.
fragmede•4mo ago
Some people are able to shut the hell up. If you're not one of them, you're not getting told. Some people can keep a secret. Some people can't. Others get shot. Warframe is a hilarious example where people can't shut the hell up about things they know they should keep quiet about.
afthonos•4mo ago
It is, actually. A correct statement would be “absence of proof is not proof of absence”, but “evidence” and “proof” are not synonyms.
Avamander•4mo ago
Large amounts of data, like backups, are encrypted using a symmetric algorithm. Which makes the strength of Ed25519 somewhat unimportant in this context.
TMWNN•4mo ago
DES is an example of where people were sure that NSA persuaded IBM to weaken it but, to quote Bruce Schneier, "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES". <https://www.cnet.com/news/privacy/saluting-the-data-encrypti...>
JumpCrisscross•4mo ago
> America is not a stable ally, and has a history of spying on friends

America is a shitty ally for many reasons. But spying on allies isn’t one of them. Allies spy on allies to verify they’re still allies. This has been done throughout history and is basic competency in statecraft.

9dev•4mo ago
That doesn’t capture the full truth. Since Snowden, we have hard evidence the NSA has been snooping on foreign governments and citizens alike with the purpose of harvesting data and gathering intelligence, not just to verify their loyalty.

No nation should trust the USA, especially not with their state secrets, if they can help it. Not that other countries are inherently more trustworthy, but the US is a known bad actor.

JumpCrisscross•4mo ago
> Since Snowden, we have hard evidence the NSA has been snooping on foreign governments and citizens alike

We also know this is true for Russia, China and India. Being spied on is part of the cost of relying on external security guarantees.

> Not that other countries are inherently more trustworthy, but the US is a known bad actor

All regional and global powers are known bad actors. That said, Seoul is already in bed with Washington. Sending encrypted back-ups to an American company probably doesn't increase its threat cross section materially.

shakna•4mo ago
Being "in bed with Washington" doesn't really seem any kind of protection right now.

Case in point: https://en.wikipedia.org/wiki/2025_Georgia_Hyundai_plant_imm...

> The raid led to a diplomatic dispute between the United States and South Korea, with over 300 Koreans detained, and increased concerns about foreign companies investing in the United States.

9dev•4mo ago
> All regional and global powers are known bad actors.

That they are. Americans tend to view themselves as "the good guys", however, which is wrong and thus needs pointing out in particular.

> That said, Seoul is already in bed with Washington. Sending encrypted back-ups to an American company probably doesn't increase its threat cross section materially.

If they have any secrets they attempt to keep even from Washington, they are contained in these backups. If that is the case, storing them (even encrypted) with an American company absolutely compromises security, even if there is no known threat vector at this time. The moment you give up control of your data, it will forever be subject to new threats discovered afterward. And that may just be something like observing the data volume after an event occurs that might give something away.

signatoremo•4mo ago
There is no such thing as good or trustworthy actors when it comes to state affairs. Each and every one attempts to spy on the others. Perhaps the US has more resources to do so than some others.

You really have no evidence to back up your assertion, because you’d have to be an insider.

9dev•4mo ago
> There is no such thing as good or trustworthy actors when it comes to state affairs. Each and every one attempts to spy on the others. Perhaps the US has more resources to do so than some others.

Perhaps is doing a lot of work here. They do, and they are. That is what the Snowden leaks proved.

> You really have no evidence to back up your assertion, because you’d have to be an insider.

I don't, because the possibility alone warrants the additional caution.

shakna•4mo ago
Didn't mean to imply one followed from the other. Rather that both combined create a risk.
dralley•4mo ago
> France spies on the US just as the US spies on France, the former head of France’s counter-espionage and counter-terrorism agency said Friday, commenting on reports that the US National Security Agency (NSA) recorded millions of French telephone calls.

> Bernard Squarcini, head of the Direction Centrale du Renseignement Intérieur (DCRI) intelligence service until last year, told French daily Le Figaro he was “astonished” when Prime Minister Jean-Marc Ayrault said he was "deeply shocked" by the claims.

> “I am amazed by such disconcerting naiveté,” he said in the interview. “You’d almost think our politicians don’t bother to read the reports they get from the intelligence services.”

> “The French intelligence services know full well that all countries, whether or not they are allies in the fight against terrorism, spy on each other all the time,” he said.

> “The Americans spy on French commercial and industrial interests, and we do the same to them because it’s in the national interest to protect our companies.”

> “There was nothing of any real surprise in this report,” he added. “No one is fooled.”

ants_everywhere•4mo ago
France has had a reputation for being especially active in industrial espionage since at least the 1990s. Here's an article from 2011 https://www.france24.com/en/20110104-france-industrial-espio...

I always thought it was a little unusual that the state of France owns over 25% of the defense and cyber security company Thales.

kergonath•4mo ago
> I always thought it was a little unusual that the state of France owns over 25% of the defense and cyber security company Thales.

Unusual from an American perspective, maybe. The French state has stakes in many companies, particularly in critical markets that affect national sovereignty and security, such as defence or energy. There is a government agency to manage this: https://en.wikipedia.org/wiki/Agence_des_participations_de_l... .

mensetmanusman•4mo ago
Spies play one of the most important roles in global security.

People who don’t know history think spying on allies is bad.

terminalshort•4mo ago
There are no stable allies. No country spies on its friends because countries don't have friends, they have allies. And everybody spies on their allies.
neom•4mo ago
Good thing Korea has cloud providers; apparently Kakao has even gone... beyond the cloud!

https://kakaocloud.com/ https://www.nhncloud.com/ https://cloud.kt.com/

To name a few.

alephnerd•4mo ago
They are overwhelmingly whitelabeled providers. For example, Samsung SDI Cloud (the largest "Korean" cloud) is an AWS white label.

Korea is great at a lot of engineering disciplines. Sadly, software is not one of them, though it's slowly changing. There was a similar issue a couple of years ago where the government's internal intranet was down for a couple of days because someone deployed a switch in front of outbound connections without anyone noticing.

It's not a talent problem but a management problem, similar to Japan's issues, which is unsurprising as Korean institutions and organizations are heavily based on Japanese ones from back in the JETRO era.

skissane•4mo ago
I spent a week of my life at a major insurance company in Seoul once, and the military-style security, the obsession with corporate espionage, when all they were working on was an internal corporate portal for an insurance company… The developers had to use machines with no Internet access, and I wasn’t allowed to bring my laptop with me lest I use it to steal their precious code. A South Korean colleague told me it was this way because South Korean corporate management is stuffed full of ex-military officers who take the attitudes they get from defending against the North with them into the corporate world. No wonder the project was having so many technical problems; but I couldn’t really solve them, because ultimately the problems weren’t really technical.
ahartmetz•4mo ago
I've done some work for a large SK company and the security was manageable. Certainly higher than anything I've seen before or after and with security theater aspects, but ultimately it didn't seriously get in the way of getting work done.
skissane•4mo ago
I think it makes sense that although this is a widespread problem in South Korea, some places have it worse than others; you obviously worked at a place where the problem was more moderate. And I went there over a decade ago, and maybe even the place I was at has lightened up a bit since.
throwaway2037•4mo ago

    > South Korean corporate management is stuffed full of ex-military officers
For those unaware, all "able-bodied" South Korean men are required to do about two years of military service, so this sentence doesn't do much for me. Please remember that Germany also had required military service until quite recently. That would mean anyone "old" (over 40) doing corp mgmt there was probably also a military officer.
skissane•4mo ago
The way it was explained to me was different... yes, all able-bodied males do national service. But there's a different phenomenon in which someone serves some years active duty (so this is not their mandatory national service, this is voluntary active duty service), in some relatively prestigious position, and then jumps ship to the corporate world, and they get hired as an executive by their ex-comrades/ex-superiors... so there ends up being a pipeline from more senior volunteer active duty military ranks into corporate executive ranks (especially at large and prestigious firms), and of course that produces a certain culture, which then tends to flow downhill
solarengineer•4mo ago
Did you happen to notice interesting phenomena like “the role becomes a rank”?
jeena•4mo ago
The difference is that South Korea is currently technically still at war with North Korea.
yard2010•4mo ago
This. You and half of the smart people here in the comments clearly have no idea what it's like to live across the border from a country that wants you eradicated.
cthalupa•4mo ago
Depends on if these were commissioned officers or NCOs. Basically everyone reaches NCO by the end of service (used to be automatic, now there are tests that are primarily based around fitness), but when people specifically call out officers they tend to be talking about ones with a commission. You are not becoming a commissioned officer through compulsory service.
majewsky•4mo ago
> That means anyone "old" (over 40) and doing corp mgmt was probably also a military officer.

Absolutely not. It was very common in Germany to deny military service and instead do a year of civil service as a replacement. Also, there were several exceptions from the """mandatory""" military service. I have two brothers who had served, so all I did was tick a checkbox and I was done with the topic of military service.

hylaride•4mo ago
Also Israel, and their tech ecosystem is tier 1.

As somebody that has also done work in Korea (with one of their banks), my observation was that almost all decision making was top-down; people were forced to do a ton of monotonous work based on the whims of upper management, and people below could not talk back. I literally stood and watched a director walk in after we had racked a bunch of equipment and comment that the disk arrays should be higher up. When I asked why (they were at the bottom for weight and centre-of-gravity reasons), he looked shocked that I even asked and tersely said that the blinking lights of the disks at eye level show the value of the purchase better.

I can't imagine writing software in that kind of environment. It'd be almost impossible to do clean work, and even if you did it'd get interfered with. On top of that nobody could go home before the boss.

I did enjoy the fact that the younger Koreans we were working with asked me and my colleague how old we were, because my colleague was 10 years older than me and they were flabbergasted that I was not deferring to him in every conversation, even though we were both equals professionally.

This was circa 2010, so maybe things are better, but oh my god I'm glad these were business trips and I was happy to be flying home each time (though my mouth still waters at the marinated beef at the bbq restaurants I went to...).

alephnerd•4mo ago
Military culture in SK (especially amongst the older generation who served before democratization in the late 1990s) is extremely hierarchical.
ycombinatrix•4mo ago
Not all able-bodied men become officers.
vgivanovic•4mo ago
I am very happy with the software that powers my Hyundai Tucson hybrid. (It's a massive system that runs the gas and electric engines, recharging, gear shifting, braking, object detection, and a host of information and entertainment systems.) After 2 years: 0 crashes and no observable errors. Of course, nothing is perfect: the maps suck. The navigation is fine; it's the display that is at least 2 decades behind the times.
jeena•4mo ago
I've been working for a Korean Hyundai supplier for two years, training them in modern software development processes. The programming part is not a problem; they have a lot of talented people.

The big problem from my point of view is management. Everyone pushes responsibility and work all the way down to the developers, so that they do basically everything themselves, from negotiating with the customer and writing the requirements (or not) to designing the architecture, writing the code, and testing the system.

If they're late, they just stay and work longer and on the weekends and sleep at the desk.

Kinrany•4mo ago
> If they're late, they just stay and work longer and on the weekends and sleep at the desk.

This is the only part that sounds bad? Negotiating with customers may require some help as well but it's better than having many layers in between.

krageon•4mo ago
If the dev does everything, their manager may as well be put in a basket and pushed down the river. You can be certain there are a lot of managers. The entire storyline sounds like enterprise illness to me to be honest.
tirant•4mo ago
I’ve driven a Tucson several times recently (rentals). It did not crash, but it was below acceptable. A 15-year-old VW Golf has better handling than the Tucson.
throwaway2037•4mo ago

    > Korea is great at a lot of engineering disciplines. Sadly, software is not one of them
I disagree. People say the same about Japan and Taiwan (and Germany). IMHO, they are overlooking the incredible talents in embedded programming. Think of all of the electronics (including automobiles) produced in those countries.
justinclift•4mo ago
Embedded electronics, including from those countries, does not have an enviable reputation. :(
throwaway2037•4mo ago
What about automobiles from Japan, Korea, and Germany? They are world class. All modern cars must have millions of lines of code to run all kinds of embedded electronics. Do I misunderstand?
justinclift•4mo ago
Yet people complain about their many software and UI issues all the time.
alephnerd•4mo ago
Good point! I wasn't treating that as "software" in my answer, but it's true that their embedded programming scene is fairly strong.
deaux•4mo ago
That doesn't seem accurate at all. The big 3 Korean clouds used inside Korea are NHN Cloud, Naver Cloud and now KT. Which one of these is whitelabeled? And what's the source on Samsung SDI Cloud being the "largest Korean cloud"? What metric?

NHN Cloud is in fact being used more and more in the government [1], as well as playing a big part in the recovery effort of this fire. [2]

No, unlike what you're suggesting, Korea has plenty of independent domestic clouds, and the government has been adopting them more and more. It's not on the level of China, Russia or obviously the US, but it's very much there and accelerating quickly. Incomparable to places like the EU, which still has almost nothing.

[1] https://www.ajunews.com/view/20221017140755363 (2022; will have grown a lot since)

[2] https://www.mt.co.kr/policy/2025/10/01/2025100110371768374

dralley•4mo ago
Samsung owns Joyent
ciupicri•4mo ago
Nevertheless, isn't Joyent registered in the US?
DaiPlusPlus•4mo ago
The last time I heard of Joyent was in the mid-2000s on John Gruber’s blog, when it was something like a husband-and-wife operation with something to do with WordPress or MovableType; 20 years later it’s a division of Samsung?

My head hurts

pjmlp•4mo ago
In the meantime, they sponsored development of node in its early days, created a cloud infrastructure based on OpenSolaris, and eventually got acquired by Samsung.
dtech•4mo ago
Encrypted backups would have saved a lot of pain here
edoceo•4mo ago
Any backup would do at this point. I think the best is: encrypted, off-site, and tested monthly.
CamouflagedKiwi•4mo ago
And yet here is an example where keeping critical data off public cloud storage has been significantly worse for them in the short term.

Not that they should just go all in on it, but an encrypted copy on S3 or GCS would seem really useful right about now.

vladms•4mo ago
You can do a bad job with public or private cloud. What if they had had the backup but lost the encryption key?

Cost-wise, having a backup even in a different Korean data center would probably not have been a huge effort, but not doing it exposed them to a huge risk.

Cthulhu_•4mo ago
Then they didn't have a correct backup to begin with; for high profile organizations like that, they need to practice outages and data recovery as routine.

...in an ideal world anyway, in practice I've never seen a disaster recovery training. I've had fire drills plenty of times though.

hinkley•4mo ago
We’ve had Byzantine crypto key solutions since at least 2007 when I was evaluating one for code signing for commercial airplanes. You could put an access key on k:n smart cards, so that you could extract it from one piece of hardware to put on another, or you could put the actual key on the cards so burning down the data center only lost you the key if you locked half the card holders in before setting it on fire.
n5NOJwkc7kRC•4mo ago
SSS is from 1979. https://en.wikipedia.org/wiki/Shamir%27s_secret_sharing
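A toy 2-of-3 Shamir split over a prime field, sketching the k:n idea described above (illustrative only, not production code):

    import secrets

    P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

    def split(secret: int, n: int = 3, k: int = 2):
        # Random polynomial of degree k-1 with the secret as constant term.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation evaluated at x = 0 (Python 3.8+ for pow(-1)).
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    secret = secrets.randbelow(P)
    shares = split(secret)
    assert recover(shares[:2]) == secret  # any 2 of the 3 shares suffice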
hinkley•4mo ago
Rendering security concepts in hardware always adds a new set of concerns. Which Shamir spent a considerable part of his later career testing and documenting. If you look at side channel attacks you will find his name in the author lists quite frequently.
JumpCrisscross•4mo ago
> no government should keep critical data on foreign cloud storage

Primary? No. Back-up?

These guys couldn’t provision a back-up for their on-site data. Why do you think it would have been competently encrypted?

jacquesm•4mo ago
They fucked up, that much is clear, but they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here.
JumpCrisscross•4mo ago
> they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here

Doesn't have to be an American provider (though anyone else probably increases Seoul's security cross section; America is already its security guarantor, with tens of thousands of troops stationed in Korea).

And it doesn't have to be permanent. Ship encrypted copies to S3 while you get your hardened-bunker domestic option constructed. Still beats the mess that's about to come for South Korea's population.

jacquesm•4mo ago
I'm aware of a big cloud services provider (I won't name any names but it was IBM) that lost a fairly large amount of data. Permanently. So that too isn't a guarantee. They simply should have made local and off-line backups, that's the gold standard, and to ensure that those backups are complete and can be used to restore from scratch to a complete working service.
nicolas_17•4mo ago
DigitalOcean lost some of my files in their object storage too: https://status.digitalocean.com/incidents/tmnyhddpkyvf

Using a commercial provider is not a guarantee.

lukevp•4mo ago
DO Spaces, for at least a year after launch, had no durability guarantees whatsoever. Perhaps they do now, but I wouldn’t compare DO in any meaningful way to S3, which has crazy high durability guarantees as well as competent engineering effort expended on designing and validating that durability.
xoa•4mo ago
>I'm aware of a big cloud services provider (I won't name any names but it was IBM) that lost a fairly large amount of data. Permanently. So that too isn't a guarantee.

Permanently losing data at a given store point isn't relevant to losing data overall. Data store failures are assumed or else there'd be no point in backups. What matters is whether failures in multiple points happen at the same time, which means a major issue is whether "independent" repositories are actually truly independent or whether (and to what extent) they have some degree of correlation. Using one or more completely unique systems done by someone else entirely is a pretty darn good way to bury accidental correlations with your own stuff, including human factors like the same tech people making the same sorts of mistakes or reusing the same components (software, hardware or both). For government that also includes political factors (like any push towards using purely domestic components).

>They simply should have made local and off-line backups

FWIW there's no "simply" about that though at large scale. I'm not saying it's undoable at all but it's not trivial. As is literally the subject here.

jacquesm•4mo ago
> Permanently losing data at a given store point isn't relevant to losing data overall.

I can't reveal any details but it was a lot more than just a given storage point. The interesting thing is that there were multiple points along the way where the damage would have been recoverable but their absolute incompetence made matters much worse to the point where there were no options left.

> FWIW there's no "simply" about that though at large scale. I'm not saying it's undoable at all but it's not trivial. As is literally the subject here.

If you can't do the job you should get out of the kitchen.

Dylan16807•4mo ago
In this context the entirety of IBM cloud is basically a single storage point.

(If IBM was also running the local storage then we're talking about a very different risk profile from "run your own storage, back up to a cloud" and the anecdote is worth noting but not directly relevant.)

hedora•4mo ago
If that’s the case, then they should make it clear they don’t provide data backup.

A quick search reveals IBM does still sell backup solutions, including ones that back up from multiple cloud locations and can restore to multiple distinct cloud locations while maintaining high availability.

So, if the claims are true, then IBM screwed up badly.

xoa•4mo ago
>I can't reveal any details but it was a lot more than just a given storage point

Sorry, brain not really clicking tonight and I used lazy, imprecise terminology here; been a long one. But what I meant by "store point" was any single data repository that can be interacted with as a unit, regardless of implementation details, that's part of a holistic data storage strategy. So in this case the entirety of IBM would be a "storage point", and then your own self-hosted system would be another, and if you also had data replicated to AWS etc. those would be others. IBM (or any other cloud storage provider operating in this role) effectively might as well simply be another hard drive. A very big, complex and pricey magic hard drive that can scale its own storage and performance on demand, granted, but still a "hard drive".

And hard drives fail, and that's OK. Regardless of the internal details of how the IBM-HDD ended up failing, the only way it'd affect the overall data is if that failure happened simultaneously with enough other failures at local-HDD and AWS-HDD and rsync.net-HDD and GC-HDD etc. that it exceeded available parity to rebuild. If these are all mirrors, then only simultaneous failure of every single last one of them would do it. It's fine for every single last one of them to fail... just separately, with enough of a time delta between each one that the data can be rebuilt on another.

>If you can't do the job you should get out of the kitchen.

Isn't that precisely what bringing in external entities as part of your infrastructure strategy is? You're not cooking in their kitchen.

jacquesm•4mo ago
Ah ok, clear. Thank you for the clarification. Some more interesting details: the initial fault was triggered by a test of a fire suppression system; that would have been recoverable. But someone thought they were exceedingly clever and was going to fix it without any downtime, and that's when a small problem became a much larger one, more so when they found out that their backups were incomplete. I still wonder if they ever did an RCA/PM on this and what their lessons learned were. It should be a book-sized document given how much went wrong. I got the call from one of their customers after their own efforts had failed, and after hearing them out I figured this was not worth my time because it just wasn't going to work.
xoa•4mo ago
Thanks in turn for the details, always fascinating (and useful for lessons... even if not always for the party in question dohoho) to hear a touch of inside baseball on that kind of incident.

>But someone thought they were exceedingly clever and they were going to fix this without any downtime and that's when a small problem became a much larger one

The sentence "and that's when a small problem became a big problem" comes up depressingly frequently in these sorts of post mortems :(. Sometimes sort of feels like, along all the checklists and training and practice and so on, there should also simply be the old Hitchhiker's Guide "Don't Panic!" sprinkled liberally around along with a dabbing of red/orange "...and Don't Be Clever" right after it. We're operating in alternate/direct law here folks, regular assumptions may not hold. Hit the emergency stop button and take a breath.

But of course management and incentive structures play a role in that too.

mensetmanusman•4mo ago
They should have kept encrypted data somewhere else. If they know how to use encryption, it doesn’t matter where. Some people even use steganographic backups on YouTube.
bombcar•4mo ago
If you can’t encrypt your backups such that you could store them tattooed on Putin’s ass, you need to learn more about backups.
kube-system•4mo ago
Governments need to worry about:

1. Future cryptography attacks that do not exist today

2. Availability of data

3. The legal environment of the data

Encryption is not a panacea that solves every problem

charlieyu1•4mo ago
You don’t need cloud when you have the data centre, just backups in physical locations somewhere else
catlifeonmars•4mo ago
Others have pointed out: you need uptime too. So a single data center on the same electric grid or geographic fault zone wouldn’t really cut it. This is one of those times where it sucks to be a small country (geographically).
bell-cot•4mo ago
> so a single data center on the same electrical grid or geographic...

Yes, but your backup DCs can have diesel generators and a few weeks of on-site fuel. SK has some quakes, but quake-resistant DCs exist, and SK is big enough to site 3 DCs at the corners of an equilateral triangle with 250 km edges. Similar for typhoons. Invading NK armies and nuclear missiles are tougher problems, but having more geography would be of pretty limited use against those.

sneak•4mo ago
It's 2025. Encryption is a thing now. You can store anything you want on foreign cloud storage. I'd give my backups to the FSB.
justinclift•4mo ago
> I'd give my backups to the FSB.

Until you need them - like with the article here ;) - then the FSB says "only if you do these specific favours for us first...".

Cthulhu_•4mo ago
There are certifications too, which you don't get unless you conform to, for example, EU data protection laws. On paper, anyway. But these have opened up Amazon and Azure to e.g. Dutch government agencies; the tax office will be migrating to Office365, for example.
tirant•4mo ago
Encryption does not ensure any type of availability.
preisschild•4mo ago
Especially on US cloud storage.

The data is never safe thanks to the US Cloud Act.

HardCodedBias•4mo ago
Why not?

Has there been any interruption in service?

pico303•4mo ago
What a lame excuse. “The G-Drive’s structure did not allow for backups” is a blatant lie. It’s code for, “I don’t value other employees’ time and efforts enough to figure out a reliable backup system; I have better things to do.”

Whoever made this excuse should be demoted to a journeyman ops engineer. Firing would be too good for them.

CoastalCoder•4mo ago
You could be right, but it could also be a bad summary or bad translation.

We shouldn't rush to judgement.

MBCook•4mo ago
It could be accurate. Let’s say, for whatever reason, it is.

Ok.

Then it wasn’t a workable design.

The idea of “backup sites” has existed forever. The fact you use the word “cloud” to describe your personal collection of servers doesn’t suddenly mean you don’t need backups in a separate physical site.

If the government mandates its use, it should have a hot site at a minimum. Even without that, a physical backup in a separate physical location in case of fire/attack/tsunami/large band of hungry squirrels is a total must-have.

However it was decided that not having that was OK, that decision was negligence.

sph•4mo ago
Silly to think this is the fault of ops engineers. More likely, the project manager or C-suite didn't have the time or budget to allocate to disaster recovery.

The project shipped, it's done, they've already moved us onto the next task, no one wants to pay for maintenance anyway.

This has been my experience in 99% of the companies I have worked for in my career, while the engineers that built the bloody thing groan and are well-aware of all the failure modes of the system they've built. No one cares, until it breaks, and hopefully they get the chance to say "I **** told you this was inadequate"

littlestymaar•4mo ago
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information

They were still right though: it's absolutely clear, without an ounce of doubt, that whatever you put on a US cloud is accessible to the US government, which can also decide to sanction you and deprive you of your ability to access the data yourself.

Not having backups is entirely retarded, but that's also completely orthogonal.

otterley•4mo ago
The U.S. Government can’t decrypt data for which it does not possess the key (assuming the encryption used is good).
dboreham•4mo ago
In theory. I'm very much happier to have my encrypted data also not be available to adversaries.
littlestymaar•4mo ago
Well, first of all, neither you nor I know the decryption capabilities of the NSA; all we know is that they have hired more cryptologists than the rest of the world combined.

Also, it's much easier for an intelligence service to get its hands on a 1kB encryption key than on a PB of data: the former is much easier to exfiltrate without being noticed.

And then I don't know why you bring encryption up here: pretty much none of the use-cases for using a cloud allow for fully encrypted data. (The only one that does is storing encrypted backups on the cloud, but the issue here is that the operator didn't do backups in the first place…)

otterley•4mo ago
1. More evidence suggests that NSA does not know how to decrypt state-of-the-art ciphers than suggests they do. If they did know, it's far less likely we'd have nation states trying to force Apple and others to provide backdoors for decryption of suspects' personal devices. (Also, as a general rule, I don't put too much stock in the notion that governments are far more competent than the private sector. They're made of people, they pay less than the private sector, and in general, if a government can barely sneeze without falling over, it's unlikely they can beat our best cryptologists at their own game.)

2. The operative assumption in my statement is that the government does not possess the key. If they do possess it, all bets are off.

3. This thread is about a hypothetical situation in which the Korean government did store backups with a U.S.-based cloud provider, and whether encryption of such backups would provide adequate protection against unwanted intrusion into the data held within.

littlestymaar•4mo ago
> 2. The operative assumption in my statement is that the government does not possess the key. If they do possess it, all bets are off.

All bets are off from the start. At some point the CIA managed to get their hands on the French nuclear keys…

> 3. This thread is about a hypothetical situation in which the Korean government did store backups with a U.S.-based cloud provider

This thread is about using US cloud providers, that's it, you are just moving the goalpost.

alwa•4mo ago
Not sure “sane backup strategy” and “park your whole government in a private company under American jurisdiction” are mutually exclusive. I feel like I can think of a bunch of things that a nation would be sad to lose, but would be even sadder to have adversaries rifling through at will. Or, for that matter, extort favors under threat of cutting off your access.

At least in this case you can track down said officials in their foxholes and give them a good talking-to. Good luck holding AWS/GCP/Azure accountable…

atoav•4mo ago
Well, it is just malpractice. Even when I was a first-semester art student I knew about the concept of off-site backups.
Nux•4mo ago
He may or may not have been right, but it's beside the point.

The 3-2-1 backup rule is basic.

StopDisinfo910•4mo ago
The issue here is not refusing to use a foreign third party. That makes sense.

The issue is mandating the use of remote storage and not backing it up. That’s insane. It’s like the most basic amount of preparation you do. It’s recommended to even the smallest of companies specifically because a fire is a risk.

That’s gross mismanagement.

chatmasta•4mo ago
Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.
ateng•4mo ago
It _almost_ sounds like you're suggesting the fire was deliberate!
wongarsu•4mo ago
It is very convenient timing
razakel•4mo ago
>This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...

>Was 1967 a particularly bad winter?

>No, a marvellous winter. We lost no end of embarrassing files.

url00•4mo ago
Yes, Minister! A great show that no one in the US has heard of, which is a shame.
linksnapzz•4mo ago
"We must do something --> this is something --> We must do this!"
cm2187•4mo ago
Or investigations into a major financial scandal in a large French bank!

(While the Credit Lyonnais was investigated in the 90s, both the HQ and the site where they stored their archives were destroyed by fire within a few months)

acchow•4mo ago
> The issue here is not refusing to use a foreign third party. That makes sense.

Encrypt before sending to a third party?

jeroenhd•4mo ago
Of course you'd encrypt the data before uploading it to a third party, but there's no reason why that third party should be under the control of a foreign government. South Korea has more than one data center it can store data inside of; there's no need to trust other governments with every byte of data you've gathered, even if there are no known backdoors or flaws in your encryption mechanism (which I'm sure some governments have been looking into for decades).
yetihehe•4mo ago
There is a reason that NIST recommends new encryption algorithms from time to time. If you get a copy of ALL government data, in 20 years you might be able to break encryption and get access to ALL government data from 20yr ago, no matter how classified they were, if they were stored in that cloud. Such data might still be valuable, because not all data is published after some period.
rstuart4133•4mo ago
That doesn't sound like a good excuse to me.

AES-128 has been the formal standard for 23 years. The only "foreseeable" event that could challenge it is quantum computing. The likely post-quantum replacement is ... AES-256, which is already a NIST standard. NIST won't replace AES-256 in the foreseeable future.

All that aside, there is no shortage of ciphers. If you are worried about one being broken, chain a few of them together.

And finally, no secret has to last forever. Western governments tend to declassify just about everything after 50 years. After 100 everyone involved is well and truly dead.
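
For illustration, a minimal sketch of the "chain a few of them together" idea, assuming Python's cryptography package (function name and key handling are mine, not a standard recipe): encrypt with AES-256-GCM, then encrypt that ciphertext again with ChaCha20-Poly1305, so an attacker has to break both primitives.

    # Cascade-encryption sketch (assumes the `cryptography` package).
    # Breaking the outer cipher alone still leaves AES-256-GCM intact.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    def cascade_encrypt(plaintext: bytes):
        k1 = AESGCM.generate_key(bit_length=256)
        k2 = ChaCha20Poly1305.generate_key()
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = AESGCM(k1).encrypt(n1, plaintext, None)        # layer 1
        outer = ChaCha20Poly1305(k2).encrypt(n2, inner, None)  # layer 2
        return outer, (n1, k1), (n2, k2)  # store the two keys separately

    ciphertext, key1, key2 = cascade_encrypt(b"cabinet minutes")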

indolering•4mo ago
That's going away. We are seeing fewer deprecations of crypto algorithms over time, AFAICT. The mathematical foundations are becoming better understood and the implementations' assurance levels are improving too. I think we are going up the bathtub curve here.

The value of said data diminishes with time too. You can totally do an off-site cloud backup with mitigation fallbacks should another country become unfriendly. Hell, shard them such that you need n-of-m backups to reconstruct and host each node in a different jurisdiction.

Not that South Korea couldn't have Samsung's Joyent acquisition handle it.
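
The simplest form of that n-of-m sharding idea is a 2-of-2 XOR split; a toy sketch in Python (a real deployment would use Shamir secret sharing or an erasure code for general n-of-m):

    # 2-of-2 secret split: each share alone is indistinguishable from noise.
    # Generalizing to n-of-m needs Shamir's scheme or an erasure code.
    import os

    def split(secret: bytes):
        share_a = os.urandom(len(secret))                        # pure randomness
        share_b = bytes(x ^ y for x, y in zip(secret, share_a))  # secret XOR pad
        return share_a, share_b  # store in different jurisdictions

    def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    a, b = split(b"backup master key")
    assert reconstruct(a, b) == b"backup master key"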

DoctorOetker•4mo ago
I don't consider myself special; anything I can find, proof assistants using ML will eventually find...
gregoriol•4mo ago
If you are really paranoid to that point, you probably wouldn't follow NIST recommendations for encryption algorithms as it is part of the Department of Commerce of the United States, even more in today's context.
IAmBroom•4mo ago
The reason is because better ones have been developed, not because the old ones are "broken". Breaking algos is now a matter of computer flops spent, not clever hacks being discovered.

When the flops required to break an algo exceed the energy available on the planet, items are secure beyond any reasonable doubt.

vitorgrs•4mo ago
Would you think that the U.S would encrypt gov data and store on Alibaba's Cloud? :)
nurumaik•4mo ago
Why not?
cornholio•4mo ago
Because it lowers the threshold for a total informational compromise attack from "exfiltrate 34PB of data from secure govt infrastructure" down to "exfiltrate 100KB of key material". You can get that out over a few days just by pulsing any LED visible from outside an air-gapped facility.
dkga•4mo ago
Wait what?
lambdaone•4mo ago
There are all sorts of crazy ways of getting data out of even air-gapped machines, provided you are willing to accept extremely low data rates to overcome attenuation. Even with a million-to-one signal-to-noise ratio, you can get significant amounts of key data out in a few weeks.

Jiggling disk heads, modulating fan rates, increasing and decreasing power draw... all are potential information leaks.
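
To put rough numbers on it (mine, purely illustrative): 100 KB of key material is about 820,000 bits, so even a 1 bit/s channel (a slowly blinking LED, a modulated fan) leaks all of it in about ten days.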

ta20240528•4mo ago
> There are all sorts of crazy ways of getting data out of even air-gapped machines.

Chelsea Manning apparently did it by walking in and out of the facility with a CD marked 'Lady Gaga'. Repeatedly.

https://www.theguardian.com/world/2010/nov/28/how-us-embassy...

abujazar•4mo ago
On which TV show?
WCSTombs•4mo ago
As of today, there's no way to prove the security of any available cryptosystem. Let me say that differently: for all we know, ALL currently available cryptosystems can be easily cracked by some unpublished techniques. The only sort-of exception to that requires quantum communication, which is nowhere near practicability on the scale required. The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.

On the other hand, some popular cryptosystems that were more common in the past have been significantly weakened over the years by mathematical advances. Those were also based on math problems that were believed to be "hard." (They're still very hard actually, but less so than we thought.)

What I'm getting at is that if you have some extremely sensitive data that could still be valuable to an adversary after decades, you know, the type of stuff the government of a developed nation might be holding, you probably shouldn't let it get into the hands of an adversarial nation-state even encrypted.

yard2010•4mo ago
Thank you for writing this post. This should be the top comment. This is a state actors game, the rules are different.
Intermernet•4mo ago
While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.

The current state of encryption is based on math problems many levels harder than the ones that existed a few decades ago. Most vulnerabilities have been due to implementation bugs, not actual math bugs. Probably the highest-profile "actual math" bug is the DUAL_EC_DRBG weakness, which was (almost certainly) deliberately inserted by the NSA, and which triggered a wave of distrust in not just NIST but any committee-designed encryption standard. This is why people prefer to trust DJB over NIST.

There are enough qualified eyes on most modern open encryption standards that I'd trust them to be as strong as any other assumptions we base huge infrastructure on. Tensile strengths of materials, force of gravity, resistance and heat output of conductive materials, etc, etc.

The material risk to South Korea was almost certainly orders of magnitude greater by not having encrypted backups, than by having encrypted backups, no matter where they were stored (as long as they weren't in the same physical location, obviously).

famouswaffles•4mo ago
>While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.

No you can't. Those aren't hard math problems. They're Universe breaking assertions.

This is not the problem of flight. They're not engineering problems. They're not, "perhaps in the future, we'll figure out..".

Unless our understanding of physics is completely wrong, then None of those things are ever going to happen.

Intermernet•4mo ago
According to our understanding of physics, which is based on our understanding of maths, the time taken to brute-force a modern encryption standard, even with quantum computers, is longer than the expected life of the universe. The likelihood of "finding a shortcut" to do this is in the same ballpark as "finding a shortcut" to tap into ZPE or "vacuum energy" or create wormholes. The maths is understood, and no future theoretical advances can change that; it would take completely new maths to break these. We passed the "if only computers were a few orders of magnitude faster, it's feasible" point a decade or more ago.
WCSTombs•4mo ago
Sorry, I don't think this is true. There is basically no useful proven lower bound on the complexity of breaking popular cryptosystems. The math is absolutely not understood. In fact, it is one of the most poorly understood areas of mathematics. Consider that breaking any classical cryptosystem is in the complexity class NP, since if an oracle gives you the decryption key, you can break it quickly. Well we can't even prove that NP != P, i.e., that there even exists a problem where having such an oracle gives you a real advantage. Actually, we can't even prove that PSPACE != P, which should be way easier than proving NP != P if it's true.
ants_everywhere•4mo ago
> The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.

Adding to this...

Most crypto I'm aware of implicitly or explicitly assumes P != NP. That's the right practical assumption, but it's still a major open math problem.

If P = NP then essentially all crypto can be broken with classical (i.e. non-quantum) computers.

I'm not saying that's a practical threat. But it is a "known unknown" that you should assign a probability to in your risk calculus if you're a state thinking about handing over the entirety of your encrypted backups to a potential adversary.

Most of us just want to establish a TLS session or SSH into some machines.

blacklion•4mo ago
One-time pad is provably secure. But it is not useful for backups, of course.
kmoser•4mo ago
Even OTP is not secure if others have access to it.
IAmBroom•4mo ago
Every castle wall can be broken with money.
otterley•4mo ago
How much money is required to decrypt a file encrypted with a 256-bit AES key?
blacklion•4mo ago
How much would the person(s) who know the key take to move to another country and never work again?

Or how much would it cost to kidnap a key bearer's significant other?

I think these are very reasonable sums for the government of almost any country.

otterley•4mo ago
I think you assume that encryption keys are held by people like a house key in their pocket. That's not the case for organizations that are security-obsessed. They put their keys in HSMs. They practice defense in depth. They build least-privilege access controls.
thyristan•4mo ago
OTP can be useful especially for backups. Use a fast random number generator (real, not pseudo), write output to fill tape A. XOR the contents of tape A to your backup datastream and write result to Tape B. Store tape A and B in different locations.
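
A sketch of that scheme, assuming Python and ordinary files standing in for the tapes (a real setup would stream to tape drives and use a hardware RNG rather than os.urandom):

    # One-time-pad backup split: pad -> "tape A", data XOR pad -> "tape B".
    # Either file alone is information-theoretically useless.
    import os

    CHUNK = 1 << 20  # 1 MiB at a time

    with open("backup.stream", "rb") as src, \
         open("tape_a.pad", "wb") as tape_a, \
         open("tape_b.xor", "wb") as tape_b:
        while chunk := src.read(CHUNK):
            pad = os.urandom(len(chunk))  # fresh keystream, never reused
            tape_a.write(pad)
            tape_b.write(bytes(c ^ p for c, p in zip(chunk, pad)))
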
blacklion•4mo ago
But then you have one copy of the key stream. That is not safe. You need at least two places to store at least two copies of the key stream, and you cannot store it in a non-friendly cloud (this thread started from backing up sensitive government data into a cloud owned by another country, possibly an adversarial one).

If you have two physically separate places you would trust with the key stream, you could use them to back up the non-encrypted (or "traditionally" encrypted) data itself, without any OTP.

thyristan•4mo ago
You may want some redundancy, because needing both tapes increases the risk to your backup. You could just back up more often. Or you could use 4 locations, so you have redundant keystreams and redundant backup streams. In general, storing the key stream follows the same necessities as storing the backup itself, or as storing traditional encryption keys for a backup. And your backup already is a redundancy, and you will usually do multiple backups at intervals, so it really isn't that bad.

Btw, you really, really need a fresh keystream for each and every backup. You will have as many keystream tapes as you have backup tapes. Re-using the OTP keystream enables a lot of attacks; e.g. with a simple chosen plaintext an attacker can recover the keystream from the backup stream and then decrypt other backup streams with it. XORing similar backup streams also gives the attacker an idea of which bits might have changed.

And there is a difference to storing things unencrypted in two locations: If an attacker, like some evil maid, steals a tape in one location, you just immediately destroy its corresponding tape in the other location. That way, the stolen tape will forever be useless to the attacker. Only an attacker that can steal a pair of corresponding tapes in both locations before the theft is noticed could get at the plaintext.

Chris2048•4mo ago
> could still be valuable to an adversary after decades

What kind of information might be valuable after so long?

skrause•4mo ago
Because even when you encrypt, the foreign third party can still lock you out of your data by simply switching off the servers.
raxxorraxor•4mo ago
Why make yourself dependent on a foreign country for your own sensitive data?

You have to integrate the special software requirements to any cloud storage anyway and hosting a large amount of files isn't an insurmountable technical problem.

If you can provide the minimal requirements like backups, of course.

arcfour•4mo ago
Presumably because you aren't capable of building what that foreign country can offer you yourself.

Which they weren't. And here we are.

cesarb•4mo ago
> Encrypt before sending to a third party?

That sounds great, as long as nobody makes any mistake. It could be a bug in the RNG which generates the encryption keys. It could be a software or hardware defect which leaks information about the keys (IIRC, some cryptographic systems are really sensitive to this; a single bit flip during encryption can make it possible to recover the private key). It could be someone carelessly leaving the keys in an object storage bucket or source code repository. Or it could be deliberate espionage to obtain the keys.

schainks•4mo ago
Call me a conspiracy theorist, but this kind of mismanagement is intentional by design so powerful people can hide their dirty laundry.
FooBarWidget•4mo ago
Never attribute to malice what can be attributed to stupidity.

There was that time when a high-profile company's entire Google Cloud account was destroyed. Backups were on Google Cloud too. No off-site backups.

schainks•4mo ago
One of the data integrity people sadly committed suicide as a result of this fire, so I am also thinking this was an incompetence situation (https://www.yna.co.kr/view/AKR20251003030351530).

For the budget spent, you’d think they would clone the setup in Busan and sync it daily or something like this in lieu of whatever crazy backup they needed to engineer but couldn’t.

wjnc•4mo ago
You have to balance that against how far you can expect human beings to lower their standards when faced with bureaucratic opposition. No backups on a key system shifts the likelihood from stupidity toward malice, since the importance of backups has been known to IT staff, regardless of role and seniority, for 40 years or so.
ayewo•4mo ago
You were probably thinking of UniSuper [0], an Australian investment company with more than $80B AUM.

Their 3rd party backups with another provider were crucial to helping them undo the damage from the accidental deletion by GCloud.

GCloud eventually shared a post-mortem [1] about what went down.

0: https://news.ycombinator.com/item?id=40304666

1: https://cloud.google.com/blog/products/infrastructure/detail...

laserlight•4mo ago
> Never attribute to malice what can be attributed to stupidity.

Any sufficiently advanced malice is indistinguishable from stupidity.

I don't think there's anything that can't be attributed to stupidity, so the statement is pointless. Besides, it doesn't really matter naming an action stupidity, when the consequences are indistinguishable from that of malice.

FooBarWidget•4mo ago
I mean, I don't disagree that "gross negligence" is a thing. But that's still very different from outright malice. Intent matters. The legal system also makes such a distinction. Punishments differ. If you're a prosecutor, you can't just make the argument that "this negligence is indistinguishable from malice, therefore punish like malice was involved".
withinboredom•4mo ago
I know of one datacenter that burned down because someone took a dump before leaving for the day, the toilet overflowed, then flooded the basement, and eventually started an electrical fire.

I'm not sure you could realistically explain that as anything. Sometimes ... shit happens.

ArchD•4mo ago
Hanlon's Razor is such an overused meme/trope that it's become meaningless.

It's a fallacy to assume that malice is never a form of stupidity/folly. An evil person fails to understand what is truly good because of some kind of folly, e.g. refusing to internally acknowledge the evil consequences of evil actions. There is no clean evil-vs-stupid dichotomy. E.g. is a drunk driver who kills someone stupid or evil? The dangers of drunk driving are well-known, so what about both?

Additionally, we are talking about a system/organization, not a person with a unified will/agenda. There could indeed be an evil person in an organization that wants the organization to do stupid things (not backup properly) in order to be able to hide his misdeeds.

Chris2048•4mo ago
Hanlon's Razor appears to be a maxim of assuming good faith: "They didn't mean to cause this, they are just inept."

To me, it has no justification. People see malice easily, granted, but others feign ignorance all the time too.

I think a better principle is: proven and documented testing for competence, making it clear what a person's duties and (liable) responsibilities are, and thereafter treating incompetence and malice the same. Also: any action needs to be audited by a second entity who shares blame (to a measured and pre-decided degree) when they fail to do so.

xorcist•4mo ago
It's also true that "it is difficult to get a man to understand something, when his salary depends on his not understanding it."
thanatos519•4mo ago
Backups should be far away, too. Apparently some companies lost everything on 9/11 because their backups were in the other tower.
zwnow•4mo ago
Funnily enough, Germany has laws for where you are allowed to store backups exactly because of these kinds of issues. Fire, flood, earthquake, tornadoes, you name it: backups need to be stored with appropriate security in mind.
egorfine•4mo ago
Germany, of course. Like my company needs government permission to store backups.
leipert•4mo ago
More like: your company (or government agency) is critical infrastructure or of a certain size, so there are obligations on how you maintain your records. It’s not like the US or other countries don’t have similar requirements.
Skeime•4mo ago
(Without knowing the precise nature of these laws) I would expect that they don't forbid you to store backups elsewhere. It's just that they mandate that certain types of data be backed up in sufficiently secure and independent locations. If you want to have an additional backup (or backups of data not covered by the law) in a more convenient location, you still can.
egorfine•4mo ago
> sufficiently secure and independent locations

This kind of provision requires enforcement and verification, and thus a tech spec for the backup procedure. Knowing Germany well enough, I'd say that such a tech spec would be detrimental to the actual safety of the backup.

greybeard69•4mo ago
wild speculation and conjecture
egorfine•4mo ago
Agree. It is based on my experience with German bureaucracy.
f1shy•4mo ago
Not wild.

When you live in Germany and are asked to send a fax (and not an email, please). Or a digital birth certificate is not accepted until you come with lawyers, or banks aren't willing to operate with Apple Pay, just to name a few..

Speculation, yes, but not at all wild

__bjoernd•3mo ago
I'm German and in my 45 years of being so have never been required to send a fax. Snail mail yes, but never a fax.
hdgvhicv•4mo ago
No it doesn’t. It does however need to follow the appropriate standards commensurate with your size and criticality. Feel free to exceed them.
Chris2048•4mo ago
Certain data records need to be legally retained for certain amounts of time; Other sensitive data (e.g. PII) have security requirements.

Why wouldn't government mandate storage requirements given the above?

tooltalk•4mo ago
Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.

AdamN•4mo ago
Jersey City was still fine, and 50 miles can be problematic for certain types of backup (failover) protocols. Regular tape backups would be fine, but secondary databases couldn't be that far away (at least not at the time). I remember my boss at WFC saying that the most traffic over the data lines was in the middle of the night due to backups - not when everybody was in the office.
flumpcakes•4mo ago
Companies big enough will lay the fibre. 50-100 miles of fibre isn't much if you are a billion-dollar business. Even companies like BlackRock, who had their own datacenters, have since taken up Azure. 50 miles of latency is negligible, even for databases.
hnlmorg•4mo ago
The WTC attacks were in the 90s and early 00s and back then, 50 miles of latency was anything but negligible and Azure didn’t exist.

I know this because I was working on online systems back then.

I also vividly remember 9/11 and the days that followed. We had a satellite dish with multiple receivers (which wasn't common back then) so we had to run a 3rd-party Linux box to descramble the signal. We watched 24/7 global news on a crappy 5:4 CRT running Windows ME during the attack. Even in the UK, it was a somber and sobering experience.

osivertsson•4mo ago
The laws of physics haven't changed since the early 00s though; we could build very low latency point-to-point links back then too.
mcny•4mo ago
Yes but good luck trying to get funding approval. There is a funny saying that wealthy people don't become wealthy by giving their wealth away. I think it applies to companies even more.
cm2187•4mo ago
Plus long distance was mostly fibre already. And even regular electrical wires aren't really much slower than fibre in terms of latency. Parent probably meant bandwidth.
hnlmorg•4mo ago
Copper doesn't work over these kinds of distances without powered switches, which adds latency. And laying fibre over several miles would be massively expensive. Well outside the realm of all but the largest of corporations. There's a reason buildings with high bandwidth constraints huddle near internet backbones.

What used to happen (and still does as far as I know, but I've been out of the networking game for a while now) is you'd get fibre laid between yourself and your ISP. So you're then subject to the latency of their networking stack. And that becomes a huge problem if you want to do any real-time work like DB replicas.

The only way to do automated off-site backups was via overnight snapshots. And you're then running into the bandwidth constraints of the era.

What most businesses ended up doing was tape backups, then physically driving them to another site -- ideally storing them in a fireproof safe. Only the largest companies could afford to push it over fibre.

namibj•4mo ago
> There's a reason buildings with high bandwidth constraints huddle near internet backbones.

Yeah because interaction latency matters and legacy/already buried fiber is expensive to rent so you might as well put the facility in range of (not-yet-expensive) 20km optics.

> Copper doesn't work over these kinds of distances without powered switches, which adds latency.

You need a retimer, which adds on the order of 5~20 bits of latency.

> And that becomes a huge problem if you want to do any real-time work like DB replicas.

Almost no application would actually require "zero lost data", so you could get away with streaming a WAL or other form of reliably-replayable transaction log and cap it to an acceptable number of milliseconds of data loss window before applying blocking back pressure. Usually it'd be easy to tolerate enough for the around 3 RTTs you'd really want to keep to cover all usual packet loss without triggering back pressure.

Sure, such a setup isn't cheap, but it's (for a long while now) cheaper than manually fixing the data from the day your primary burned down.
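
A hedged sketch of that bounded-loss idea, assuming Python with psycopg2 against a PostgreSQL primary that already has streaming replication configured (the backpressure hook and connection string are hypothetical): poll the replica's replay lag and pause ingest when it exceeds the acceptable loss window.

    # Poll pg_stat_replication and apply backpressure when the DR replica
    # falls more than MAX_LAG_SECONDS behind (PostgreSQL 10+).
    import time
    import psycopg2  # assumed driver

    MAX_LAG_SECONDS = 0.050  # acceptable data-loss window: 50 ms

    def set_backpressure(on: bool) -> None:
        # Hypothetical hook: tell upstream writers to pause or resume.
        print("backpressure", "ON" if on else "off")

    conn = psycopg2.connect("dbname=ops host=primary.example")  # hypothetical DSN
    while True:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT COALESCE(EXTRACT(EPOCH FROM MAX(replay_lag)), 0)"
                " FROM pg_stat_replication")
            lag = float(cur.fetchone()[0])
        set_backpressure(lag > MAX_LAG_SECONDS)
        time.sleep(0.1)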

StopDisinfo910•4mo ago
To be fair, tape backups are very much OK as a disaster recovery solution. It's cheap once you have the tape drive. Bandwidth is mostly fine if you read the tapes sequentially. They're easy to store and handle and fairly resistant.

It's "only" poor if you need to restore some files in the middle or want your backup to act as a failover solution to minimise unavailability. But as a last resort solution in case of total destruction, it's pretty much unbeatable cost-wise.

G-Drive was apparently storing less than 1PB of data. That's less than 100 tapes. I guess some files were fairly stable, so it's completely manageable with a dozen tape drives, delta storage and proper rotation. We are talking about a budget of what, $50k to $100k? That's peanuts for a project of this size. Plus the tech has existed for ages and I guess you can find plenty of former data center employees with experience handling this kind of setup. They really have no excuse.
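
Sanity-checking the tape count with current-generation numbers (my figures, not the parent's): an LTO-9 cartridge holds 18 TB native, so 1 PB / 18 TB ≈ 56 cartridges. At roughly $85 per cartridge that's under $5k in media, leaving the drives, library and off-site logistics to dominate the quoted budget.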

elevation•4mo ago
The suits are stingy when it's not an active emergency. A former employer declined my request for $2K for a second NAS to replicate our company's main data store. This was just days after a harrowing recovery of critical data from a failing WD Green that was never backed up. Once the data was on a RAID mirror and accessible to employees again, there was no active emergency, and the budget dried up.
StopDisinfo910•4mo ago
I don't know. I guess that for all intents and purposes I'm what you would call a suit nowadays. I'm far from a big shot at my admittedly big company, but $50k is pretty much pocket change on this kind of project. My cloud bill has more yearly fluctuation than that. Next to the cost of employees, it's nothing.
hnlmorg•4mo ago
Switching gear was slower and laying new fibre wasn't an option for your average company. Particularly not point-to-point between your DB server and your replica.

So if real-time synchronization isn't practical, you are then left to do out-of-hours backups and there you start running into bandwidth issues of the time.

peteforde•4mo ago
Never underestimate the potential packet loss of a Concorde filled with DVDs.
dredmorbius•4mo ago
For backups, latency is far less an issue than bandwidth.

Latency is defined by physics (speed of light, through specific conductors or fibres).

Bandwidth is determined by technology, which has advanced markedly in the past 25 years.

Even a quarter century ago, the bandwidth of a station wagon full of tapes was pretty good, even if the latency was high. Physical media transfer to multiple distant points remains a viable back-up strategy should you happen to be bandwidth-constrained in realtime links. The media themselves can be rotated / reused multiple times.

Various cloud service providers have offered such services, effectively a datacentre-in-a-truck, which loads up current data and delivers it, physically, to an off-site or cloud location. A similar current offering from AWS is data transfer terminals: <https://techcrunch.com/2024/12/01/aws-opens-physical-locatio...>. HN discussion: <https://news.ycombinator.com/item?id=42293969>.

Edit to add: from the above HN discussion Amazon retired their "snowmobile" truck-based data transfer service in 2024: <https://www.datacenterdynamics.com/en/news/aws-retires-snowm...>.
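
The arithmetic still flatters the station wagon (illustrative numbers, mine): 500 LTO-9 tapes at 18 TB each is 9 PB; delivered over a 5-hour drive, that's 7.2e16 bits / 18,000 s, or about 4 Tbit/s of effective bandwidth, at a latency of five hours.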

hnlmorg•4mo ago
I’ve covered those points already in other responses. It’s probably worth reading them before assuming I don’t know the differences between the most basic of networking terms.

I was also specifically responding to the GPs point about latency for DB replication. For backups, one wouldn’t have used live replication back then (nor even now, outside of a few enterprise edge cases).

Snowmobile and its ilk was a hugely expensive service by the way. I’ve spent a fair amount of time migrating broadcasters and movie studios to AWS and it was always cheaper and less risky to upload petabytes from the data centre than it was to ship HDDs to AWS. So after conversations with our AWS account manager and running the numbers, we always ended up just uploading the stuff ourselves.

I’m sure there was a customer who benefited from such a service, but we had petabytes and it wasn’t us. And anyone I worked with who had larger storage requirements didn’t use vanilla S3, so I can’t see how Snowmobile would have worked for them either.

SEJeff•4mo ago
In the US, dark fiber will run you around $100k/mile. That's expensive for anyone, even those who can afford it. I worked in HFT for 15 years and we had tons of it.
lambdaone•4mo ago
DWDM per-wavelength costs are way, way lower than that, and, with the optional addition of encryption, perfectly secure and fast enough for disk replication for most storage farms. I've been there and done it.
namibj•4mo ago
Assuming that dark fiber is actually dark (without amplifiers/repeaters), I'd wonder how they'd justify the 4 orders of magnitude (99.99%!) profit margin on said fiber. That already includes one order of magnitude between the 12th-of-a-ribbon clad-fiber and opportunistically (when someone already digs the ground up) buried speed pipe with 144-core cable.
SEJeff•4mo ago
Google the term “high frequency trading”
xp84•4mo ago
So that's 5 million bucks for 50 miles? If there are other costs not being accounted for, like paying for the right-of-way that's one thing, but I would think big companies or in this case, a national government, could afford that bill.
SEJeff•4mo ago
Yeah, most large electronic finance companies do this. Lookup “the sniper in mahwah” for some dated but really interesting reading on this game.
palmotea•4mo ago
> Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.

IIRC, multiple IBM mainframes can be setup so they run and are administered as a single system for DR, but there are distance limits.

linksnapzz•4mo ago
A Geographically-Dispersed Parallel Sysplex for z/OS mainframes, which IBM has been selling since the '90s, can have redundancy out to about 120 miles.

At a former employer, we used a datacenter in East Brunswick, NJ that had mainframes in sysplex with partners in Lower Manhattan.

sllabres•4mo ago
If you have to mirror synchronously, the _maximum_ distances for other systems (e.g. storage mirroring with NetApp SnapMirror Synchronous, IBM PPRC, EMC SRDF/S) are all in this range.

But an important factor is that performance degrades with every microsecond of latency added, as the active node has to wait for the acknowledgement of the mirror node (~2*RTT) on every transaction. You can mirror synchronously over that distance, but the question is whether you can accept the impact.

That's not to say that one shouldn't create a replica in this case. If necessary, replicate synchronously to a nearby DC and asynchronously to a remote one.

For sure we only know the sad consequences.

linksnapzz•4mo ago
The actual distance involved in the case of the Brunswick DC is closer to 25 miles to Wall St.; but yes, latency for this is always paramount.
ylee•4mo ago
>Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

I was told right after the bombing, by someone with a large engineering firm (Schlumberger or Bechtel), that the bombers could have brought the building down had they done it right.

IAmBroom•4mo ago
They deserved to lose everything... except the human lives, of course.

That's like storing lifeboats in the bilge section of the ship, so they won't get damaged by storms.

stego-tech•4mo ago
This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure.

That being said, I can likely guess where this ends up going:

* Current IT staff and management will almost certainly be scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) existing since 2017 in some capacity.

* Nobody in government will sufficiently question what technical reason was given to justify the lack of backups, why that was never addressed, why the system went live with such a glaring oversight, etc., because that would mean holding the actual culprits accountable for mismanagement.

* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly

* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise

This whole situation sucks ass because nothing good is likely to come of it, other than maybe a handful of smart teams led by equally competent managers using this to get better backup resources for themselves.

qzw•4mo ago
I abhor the general trend of governments outsourcing everything to private companies, but in this case, a technologically advanced country’s central government couldn’t even muster up the most basic of IT practices, and as you said, accountability will likely not rest with the people actually responsible for this debacle. Even a nefarious cloud services CEO couldn’t dream up a better sales case for the wholesale outsourcing of such infrastructure in the future.
xp84•4mo ago
I'm with you. It's really sad that this provides such a textbook case of why not to own your own infrastructure.

Practically speaking, I think a lot of what is offered by Microsoft, Google, and the other big companies selling into this space is vastly overpriced and way too full of lock-in, but taking this stuff in-house without sufficient know-how and maturity is even more foolish.

It's like not hiring professional truck drivers, but instead of at least people who can basically drive a truck, hiring someone who doesn't even know how to drive a car.

mensetmanusman•4mo ago
If this is true, every government should subsidize competitors in their own country to drive down costs.
dv_dt•4mo ago
One side effect of the outsourcing strategy is to underfund internal tech teams, which then makes them less effective at both competing against and managing outsourced capabilities.
TheNewsIsHere•4mo ago
Aside from data sovereignty concerns, I think the best rebuttal to that would be to point out that every major provider contractually disclaims liability for maintaining backups.

Now, sure, there is AWS Backup and Microsoft 365 Backup. Nevertheless, those are backups in the same logical environment.

If you’re a central government, you still need to be maintaining an independent and basically functional backup that you control.

I own a small business of three people and we still run Veeam for 365 and keep backups in multiple clouds, multiple regions, and on disparate hardware.

marcusb•4mo ago
> * Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

They might (MIGHT) get fired from their government jobs, but I'll bet they land in consulting shops because of their knowledge of how the government's IT teams operate.

I'll also bet the internal audit team slides out of this completely unscathed.

disgruntledphd2•4mo ago
> I'll also bet the internal audit team slides out of this completely unscathed.

They really, really shouldn't. However, if they were shouted down by management (an unfortunately common experience) then it's on management.

The trouble is that you can either be effective at internal audit or popular, and lots of CAE's choose the wrong option (but then, people like having jobs so I dunno).

Chris2048•4mo ago
Which begs the question: does N Korea have governmental whistle-blower laws and/or services?

Also, internal audit aren't supposed to be the only audit, they are effectively pre-audit prep for external audit. And the first thing an external auditor should do - ask them probing questions about their systems and process.

mcny•4mo ago
I have never been to DPRK but based on what I've read, I wouldn't even press "report phishing" button in my work email or any task at work I was not absolutely required to do, much less go out of my way to be a whistleblower.
xp84•4mo ago
Wrong Korea, this is South Korea
disgruntledphd2•4mo ago
That's true, but by their nature, external audits are rarer so one would have expected the IA people to have caught this first.
tracker1•4mo ago
Likely it wasn't even (direct) management, but the budgeting handled by politicians and/or political appointees.
Romario77•4mo ago
I mean - it should be part of the due diligence of any competent department trying to use this G-drive. If it says there are no backups, it means it could only be used as temporary storage, maybe as a backup destination.

It's negligence all the way, not just with this G-Drive designers, but with customers as well.

tracker1•4mo ago
There's a pretty big possibility it comes down to acquisition and cost-saving from politicians in charge of the purse strings. I can all but guarantee that the systems administrators and even technical managers had suggested, recommended, and all but begged for the resources for a redundant/backup system in a separate physical location, and were denied because it would double the expense.

This isn't to preclude major ignorance in terms of those in the technology departments themselves. Having worked in/around govt projects a number of times, you will see some "interesting" opinions and positions. Especially around (mis)understanding security.

MichaelZuo•4mo ago
By definition if one department is given a hard veto, then there will always be a possibility that all the combined work of all other departments can amount to nothing, or even have a net negative impact.

The real question then is more fundamental.

someuser2345•4mo ago
> The issue here is not refusing to use a foreign third party. That makes sense.

For anyone else who's confused, G-Drive means Government Drive, not Google Drive.

saghm•4mo ago
Yeah, the whole supposed benefit of an organization using storage in the cloud is to prevent stuff like this from happening. Instead, they managed to make the damage far worse by centralizing the data, increasing the amount lost.
jstummbillig•4mo ago
It does only make sense if you are competent enough to manage data, and I mean: Any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be, if you absolutely need a certain amount of 9s of reliability. There is a reason why AWS etc can exist. I am sure the cloud market is not entirely reasonable but certainly far more reasonable than relying on some mid consultant to do this for you at this scale.
fnordpiglet•4mo ago
The issue is that without a profit incentive, of course it isn't X (backed up, redundant, highly available, whatever other aspect gets optimized away by accountants).

Having worked a great deal inside of AWS on these things, AWS provides literally every conceivable level of customer-managed security, down to customer-owned and keyed datacenters operated by AWS, with master-key HSMs owned and purchased by the customer, with customer-managed key hierarchies at all levels, and detailed audit logs of everything done by everything, including AWS itself. The security assurance of AWS is far and away beyond what even the most sophisticated state-actor infrastructure does, and is more modern to boot - because its profit incentive drives that.

Most likely this was less about national security than about nationalism. The two are easily confused, but that's fallacious. And they earned the dividends of fallacious thinking.
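
As a flavor of the customer-managed-key pattern described above, a minimal boto3 sketch (the key ARN is hypothetical; real deployments add key policies, HSM-backed key stores, and CloudTrail review):

    # Envelope encryption under a customer-managed KMS key.
    # The plaintext data key is used locally and discarded; only the
    # wrapped copy is stored next to the ciphertext.
    import boto3

    kms = boto3.client("kms")
    KEY_ARN = "arn:aws:kms:ap-northeast-2:111122223333:key/EXAMPLE"  # hypothetical

    resp = kms.generate_data_key(KeyId=KEY_ARN, KeySpec="AES_256")
    plaintext_key = resp["Plaintext"]      # encrypt the backup with this locally
    wrapped_key = resp["CiphertextBlob"]   # store this; only KMS can unwrap it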

arcfour•4mo ago
I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.

Not using a cloud provider is asinine. You can use layered encryption so that the expected lifetime of the cryptography outlasts the value of the data... and the US government itself stores data on all 3 of them, to my knowledge.

I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.

alluro2•4mo ago
It's quite wild to think the US wouldn't want access to their data on a plate, through AWS/GCP/Azure. You must not be aware of the last decade of news when it comes to the US and security.
arcfour•4mo ago
The US and South Korea are allies, and SK doesn't have much particular strategic value that I'm aware of? At least not anything they wouldn't already be sharing with the US?

Can you articulate what particular advantages the US would be pursuing by stealing SK secret data (assuming it was not protected sufficiently on AWS/GCP to prevent this, and assuming that platform security features have to be defeated to extract this data—this is a lot of risk from the US's side, to go after this data, if they are found out in this hypothetical, I might add, so "they would steal whatever just to have it" is doubtful to me).

cyphar•4mo ago
The NSA phone-tapped Angela Merkel's phone while she was chancellor, as well as her staff and the staff of her predecessor[1], despite the two countries being close allies. "We are allies, why would they need to spy on us?" is therefore provably not enough of a reason for the US not to spy on you (let's not forget that the NSA spies on the entire planet's internet communications).

The US also has a secret spy facility in Pine Gap that is believed to (among other things) spy on Australian communications, again despite both countries being very close allies. No Australians know what happens at Pine Gap, so maybe they just sit around knitting all day, but it seems somewhat unlikely.

[1]: https://www.theguardian.com/us-news/2015/jul/08/nsa-tapped-g...

subscribed•3mo ago
Airbus was spied on by the NSA for the benefit of Boeing: https://apnews.com/general-news-e88c3d44c2f347b2baa5f2fe508f...

Why do you think USA wouldn't lie, cheat and spy on someone if it had a benefit in it?

speedgoose•4mo ago
Yeah let’s fax all government data to the Trump administration.
PunchyHamster•4mo ago
The cloud will also not back up your stuff if you configure it wrong, so I'm not sure how that's related.
juancb•4mo ago
The simple solution here would have been something like a bunch of NetApps with SnapMirrors to a secondary backup site.

Or ZFS or DRBD or whatever homegrown or equivalent non-proprietary alternative is available these days that you prefer.
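
For instance, a bare-bones nightly ZFS replication job might look like the sketch below (pool names, hostnames, and snapshot scheme are all hypothetical; a real job would use incremental `zfs send -i`, retention, and monitoring):

    # Snapshot a dataset and replicate it to a secondary site over SSH.
    import subprocess
    from datetime import datetime, timezone

    snap = f"tank/gdrive@{datetime.now(timezone.utc):%Y%m%d}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # zfs send | ssh <remote> zfs recv
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(
        ["ssh", "backup.busan.example", "zfs", "recv", "-F", "backuppool/gdrive"],
        stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")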

rr808•4mo ago
There is some data privacy requirement in SK where application servers and data have to remain in the country. I worked for a big global bank and we had 4 main instances of our application: Americas, EMEA, Asia and South Korea.
chatmasta•4mo ago
If only there were a second data center in South Korea where they could backup their data…
bradly•4mo ago
When I worked on Apple Maps infra South Korea required all servers be in South Korea.
srj•4mo ago
It was the same at Google. If I'm remembering right, we couldn't export any vector-type data (raster only) and the tiles themselves had to be served out of South Korea.
kumarvvr•4mo ago
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...

They absolutely cannot be trusted, especially sensitive govt. data. Can you imagine the US state department getting their hands on compromising data on Korean politicians?

It's like handing over the govt. to US interests wholesale.

That they did not choose to keep the backup, and then another, at different physical locations is a valuable lesson, and must lead to even better design the next time.

But the solution is not to keep it in US hands.

waterTanuki•4mo ago
I understand data sovereignty in the case where a foreign entity might cut off access to your data, but this paranoia that storing info under your bed is the safest bet is straight up false. We have post-quantum encryption widely available already. If your fear is that a foreign entity will access your data, you're technologically illiterate.

Obviously no person in a lawmaking position will ever have the patience or foresight to learn about this, but the fact they won't even try is all the more infuriating.

franga2000•4mo ago
Encryption only makes sense if "the cloud" is just a data storage bucket to you. If you run applications in the cloud, you can't have all the data encrypted, especially not all the time. There are some technologies that make this possible, but none are mature enough to run even a small business, let alone a country on.

It sounds technologically illiterate to you because when people say "we can't safely use a foreign cloud" you think they're saying "to store data" and everyone else is thinking at the very least "to store and process data".

Sure, they could have used a cloud provider for encrypted backups, but if they knew how to do proper backups, they wouldn't be in this mess to begin with.

TulliusCicero•4mo ago
> G-Drive’s structure did not allow for external backups

Ha! "Did not allow" my ass. Let me translate:

> We didn't feel like backing anything up or insisting on that functionality.

p0w3n3d•4mo ago
Days? That's optimistic. It depends on what the govt cloud contained. For example, imagine all the car registrations, or all the payments to the pension fund.
j45•4mo ago
They put everything in only one datacenter. A datacenter located elsewhere should have been set up to mirror it.

This has nothing to do with commercial clouds. Commercial clouds are just datacenters. They could pick one commercial cloud data center and still fail to mirror or back up to different regions. I understand some of the services have inherent backups.

NetMageSCW•4mo ago
Mirroring is not backup.
raxxorraxor•4mo ago
Pretty sensible to not host it on these commercial services. What is not so sensible is to not make backups.
didntknowyou•4mo ago
your first criticism was they should have handed their data sovereignty over to another country?

there are many failure points here, not paying Amazon/Google/Microsoft is hardly the main point.

INTPenis•4mo ago
Dude, the issues go wayyy beyond opting for selfhosting rather than US clouds.

We use selfhosting, but we also test our fire suppression system every year, we have two different DCs, and we use S3 backups out of town.

Whoever runs that IT department needs to be run out of the country.

ubermonkey•4mo ago
I was once advised to measure your backup security in zip codes and time zones.

You have a backup copy of your file, in the same folder? That helps for some "oops" moments, but nothing else.

You have a whole backup DRIVE on your desktop? That's better. Physical failure of the primary device is no longer a danger. But your house could burn down.

You have an alternate backup stored at a trusted friend's house across the street? Better! But what if a major natural disaster happens?

True story: 30+ years ago when I worked for TeleCheck, data was their lifeblood. Every week a systems operator went to Denver, the alternate site, with a briefcase full of backup tapes. TeleCheck was based in Houston, so a major hurricane could've been a major problem.

CarlitosHighway•4mo ago
I mean he's still right about AWS etc. with the current US Administration and probably all that will follow - but that doesn't excuse not keeping backups.
stonemetal12•4mo ago
If you (as the SK government) were going to do a deal with " AWS/GCP/Azure" to run systems for the government, wouldn't you do something like the Jones Act? The datacenters must be within the country and staffed by citizens, etc.
dh2022•4mo ago
A Microsoft exec testified that the US Govt can get access to the data Azure stores in other countries. I thought this was a wild allegation, but apparently it is true [0].

[0]https://www.theregister.com/2025/07/25/microsoft_admits_it_c...

lumost•4mo ago
Usually these mandates are made by someone who evaluates “risks.” Third-party risks are evaluated under the assumption that everything will be done sensibly in the 1p scenario; to boot, the 1p option looks cheaper, as disk drives etc. are only a fraction of total cost.

Reality hits later when budget cuts/constrained salaries prevent the maintenance of a competent team. Or the proposed backup system is deemed excessively risk-averse and the money can’t be spared.

harikb•4mo ago
"Not my fault.. I asked them to save everything in G-Drive (Google Drive)"
delfinom•4mo ago
>The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...

They can't. The Trump admin sanctioning the International Criminal Court, and Microsoft blocking them from all services as a result, are proof of why.

redwood•4mo ago
S. Korea has the most backward infosec requirements. It's wild
Frost1x•4mo ago
Having just visited South Korea last year, one thing that caught me off guard was the lack of Google Maps or any other major navigation system. I wasn’t aware, but it turns out anything considered “detailed mapping” infrastructure has to be stored and run on South Korean soil, probably with lots of other requirements. So you’re stuck with some shoddy local mapping systems that are just bad.

There may have been a point in time when it made sense, but high-resolution, detailed satellite imagery is plenty accessible now, and someone could overlay roads and basic planning structure atop it, especially a foreign nation wishing to invade or whatever they’re protecting against.

Some argument may be made that it would be a heavy lift for North Korea, but I don’t buy it; it's incredibly inconvenient for tourists for no obvious reason.

WhyNotHugo•4mo ago
Several other countries have similar requirements with regards to storing and serving maps locally.

If you take a moment to think about it, what's weird is that so many countries have simply resorted to relying on Google Maps for everyday mapping and navigation needs. This has become such a necessity nowadays that relying on a foreign private corporation for it sounds like a liability.

bmandale•4mo ago
OSM is competitive with Google Maps in most places. Even if a person uses Google Maps, it's inaccurate to say they "rely" on it when they could fail over to OSM if Google Maps went down.
Avamander•4mo ago
Local mapping efforts and allowing Google Maps to operate aren't mutually exclusive though. I don't see how it's weird that people can choose which map app they use.
Frost1x•4mo ago
Agreed, I would expect a government to provide its own mapping system, independent of any private entity. It’s so critical for a government's operation and general security needs.

What’s odd (to me) is trying to stop other groups from generating maps of your nation when you have no jurisdiction over them. That’s akin to the US telling the South Korean government it can’t create maps of the US unless it operates under heavy supervision or something of that nature.

It’s impractical, largely unenforceable, and any nation probably has independent mapping of foreign nations, especially its adversaries, should it need them for conflicts, regardless of what restrictions some nation wants to impose. I guarantee the US government has highly detailed maps of Korea.

So who exactly are these regulations protecting? In this case they’re just protecting private mapping groups that reside in their country against competition.

jhasse•4mo ago
In my experience OpenStreetMap was very good there.
luispauloml•4mo ago
>So you’re stuck with some shoddy local mapping systems that are just bad.

What made you think of them as bad? Could you be more specific? I use them almost daily and I find them very good.

guillem_lefait•4mo ago
I was there a few months ago and I found them to be quite good too, both in coverage (shops, bus/metro networks) and accuracy. Obviously not the apps I'm used to, and there was the language barrier, but otherwise it was okay.
ZephyrBlu•4mo ago
They lack a lot of polish. Functionally they're mostly usable, but some interactions are janky and I found the search to be super hit or miss.
luispauloml•4mo ago
> I found the search to be super hit or miss.

I heard similar complaints from friends who came to visit. But they were using the English versions of the apps, which, when I tested them, were indeed harder to use, but never a miss for me when I helped them. OTOH, I always find my destinations within the first three options when I search in Korean. So maybe it's subpar internationalization.

> They lack a lot of polish. [...] some interactions are janky

I see. I guess I wouldn't know. It's not janky for me, and I think I am so used to it that when I need to use Google Maps, or any other, I feel a bit frustrated by the unfamiliar interface and start wishing I could be using Kakao or Naver Maps instead.

ZephyrBlu•4mo ago
I used both English and Hangul to search. Searching for general things like food was good, but if I was trying to find a specific address it was very difficult. Sometimes it would just return completely wrong garbage. One time I was trying to meet up with someone and only realized halfway that the destination was wrong because Naver decided to take me somewhere else despite me copying the exact address in Hangul.

Maybe more about my unfamiliarity with the Korean address format than anything else tbh.

Some things about Naver I kind of miss from Apple/Google maps, but international software in general feels much more user friendly and better UX than Korean software.

sexy_seedbox•4mo ago
Why didn't you use Kakao Maps or Naver Maps? They're not shoddy and work just fine; even if you don't read Korean, you can quickly guess the UI based on the icons.
eredengrin•4mo ago
Agree, Naver Maps for navigating public transit in Seoul is excellent. Easier to figure out than public transit in any American city I've been to, and I don't read or speak Korean. IIRC it even tells you the fastest routes/best carriage to be in to optimize transferring between lines.
Frost1x•4mo ago
I tried both, and the lack of an English UI made a lot of it unintuitive, especially when it came to search and finding local businesses while walking around. There were some other annoyances: when I travel for leisure I enjoy researching an area ahead of time, bookmarking places to overlay on a map, and then organically exploring the area as I move around. I found that very difficult on Naver (I don’t recall the details, but I know being able to search for types of businesses in English was part of the issue).

I believe performance-wise it was also pretty sluggish, from what I remember. I’m by no means saying it was unusable; it got me through somewhat functionally, but with a lot of extra effort on my behalf. I also had an international data plan and wasn’t able to check whether I could precache the map set vs. streaming it as needed over wireless.

I often like to look at restaurants, menus, prices, reviews as well to scope out a place quickly before going there. That process was also tedious (to be fair it could be that I’m not familiar with the UI).

The question is why I had to use Naver or Kakao in the first place. I’d rather just use the system I already enjoy and am quite proficient with, not be forced to play with some new app that I need useful information from, for some unclear reason.

tedk-42•4mo ago
A management / risk issue and NOT an engineering one.
gtirloni•4mo ago
wow https://x.com/koryodynasty/status/1973956091638890499

> A senior government official overseeing recovery efforts for South Korea's national network crisis has reportedly died by suicide in Sejong.

covercash•4mo ago
If the US government and corporate executives had even half this level of shame, we'd have nobody left in those positions!
makeitdouble•4mo ago
"suicide" in these circumstances is usually something else altogether.

Even in cases where it is carried out by the person themselves, shame won't be the primary motivation.

parineum•4mo ago
It usually isn't but people do usually imply otherwise.
spoaceman7777•4mo ago
You may want to familiarize yourself more with the culture around this in places like South Korea and Japan.
makeitdouble•4mo ago
It can be framed as shame on the surface.

More often than not the suicide covers for a whole organization's dirty laundry. You'll have people getting drunk and driving their cars over cliffs [0], low-profile actors ending their lives as shit hits the fan [1], etc.

Then some on the lower rungs might still end their lives to spare their family financially (insurance money), or because they're just so done with it all, which I'd put down to depression more than anything.

Us putting it down to shame is, IMHO, looking at it through rose-colored glasses and masking the dirtier reality to make it romantic.

[0] https://bunshun.jp/articles/-/76130

[1] https://www.tsukubabank.co.jp/cms/article/a9362e73a19dc0efcf...

raingrove•4mo ago
In Korea, shame often serves as the primary motivator behind high-profile suicides. It's rooted in the cultural concept of "chemyeon (체면)", which imposes immense pressure to maintain a dignified public image.
makeitdouble•4mo ago
Do you have any examples of these high-profile suicides that can't be better explained as "taking one for the team", for lack of a better idiom?

Shame is a powerful social force throughout the society, but we're talking about systematic screwings, more often than not backed by political corruption (letting incompetent entities handle government contracts on the basis of political money and other favors) or straight fraud.

godelski•4mo ago
You should look at the previous president of SK. Maybe a few more too... they frequently land in jail...

I'm not sure Yoon Suk Yeol had any shame

https://en.wikipedia.org/wiki/Impeachment_of_Yoon_Suk_Yeol

covercash•4mo ago
I would also be fine with US politicians and corporate executives landing in jail. At this point, any consequences will be more than they currently face.
godelski•4mo ago
We are a country without kings. No one should be above the law. Those tasked with upholding the law should be held to higher standards. I'm not sure why these are even up for debate
strawhatguy•4mo ago
The weird thing is that 13 days later his temporary successor Han was also impeached, basically because he vetoed two bills authorizing investigations into Yoon. IIRC, the constitutional court wasn’t fully appointed yet. Also, apparently an impeachment needs only a simple majority in the Assembly, and it appears the DPK (the current majority party) has been impeaching everyone they disagree with. My wife, who’s from Korea, says that Lee, the now-president, apparently had a “revolutionary” past and was thrown in jail; and one justice from the court also had a criminal record.

It’s pretty crazy over there. Lee’s probably safe right now just because his party’s the majority, but it also sounds like they’ve been abusing the impeachment process against the minority party.

godelski•4mo ago

  > My wife, who’s from Korea
Lol, I'm in a similar boat.

  > 13 days later his temporary successor Han was also impeached
Crazier than that![0]

  - Han Duck-soo: Acting president for 13 days. Impeached for refusing to investigate Yoon Suk Yeol and Kim Keon Hee (Yoon's wife). 
    - There were 192 votes against him and 108 members *abstained* from voting. This meant that they failed to form a quorum. *This vote was strictly along party lines*
    - They ruled that they only needed 50% approval because Han was the Prime Minister. *The president needs 2/3rds, btw*
  - Choi Sang-mok: was the acting PM for those 13 days, then acting president. But only serves for 87 days!
  - 24 March: SK's (equivalent of a) supreme court overrules Han's impeachment 7-1, and Han once again becomes the acting president. 
So: he was impeached after 13 days for trying to bury Yoon's impeachment case, the Conservatives refused to show up to the hearing, and months later he got reinstated by the highest court.

  > the DPK has been impeaching everyone they disagree with.
My understanding is that there's kind of a history of this, as well as of pardoning. Take Park Geun-hye[2] as an example. She was the leader of the GNP (Grand National Party; SK's conservative party), and in December 2016 she was impeached (234 to 56) for influence peddling. Hwang Kyo-ahn (Prime Minister) became acting president. In March 2017, their supreme court upheld the impeachment unanimously, and in May, Moon Jae-in (DPK) became president. In April 2018, Park was sentenced to 24 years in jail, and she was then further prosecuted for stealing money from Korea's CIA and interfering in elections. In December 2021 Hwang pardoned her, and she was back home in early 2022.

Before Yoon was Moon Jae-in (DPK), whom the GNP tried to impeach in 2019. (Hwang Kyo-ahn was acting president after Park's impeachment and preceded Moon.)

Before Park was Lee Myung-bak (GNP). He got 15 years in prison. In 2022 Yoon gave him a pardon.

Before Lee was Roh Moo-hyun (liberal party; Goh Kun was in between because...), who was impeached (193 to 2) in 2004 while his supporters were literally fighting people in the assembly. A month later the supreme court overturned the impeachment. After he left the presidency, people around him started getting sentenced. In 2009 he threw himself off a cliff as investigations closed in on him too.

Since the 60's they've had a president exiled, a coup, and even an assassination. It's fucking wild!

And don't get started on chaebols...[3]

[0] https://en.wikipedia.org/wiki/List_of_presidents_of_South_Ko...

[1] https://en.wikipedia.org/wiki/Impeachment_of_Han_Duck-soo

[2] https://en.wikipedia.org/wiki/Park_Geun-hye

[3] https://en.wikipedia.org/wiki/Chaebol

southernplaces7•4mo ago
Not the same country but another example of a culturally similar attitude towards shame over failure: In Japan in 1985, Flight 123, a massive Boeing 747 carrying 524 people, lost control shortly after takeoff from Tokyo en route to Osaka.

The plane's aft pressure bulkhead catastrophically burst, causing explosive decompression at altitude, severing all four of the massive plane's hydraulic systems and entirely tearing away its vertical stabilizer.

With these gone, the 747 became basically uncontrollable, and minutes later, despite tremendously heroic efforts by the pilots to turn back and crash-land it with some modicum of survivability for themselves and the passengers, the flight slammed into a mountain close to Tokyo, killing hundreds.

The resulting investigation showed that the failed bulkhead had burst open due to faulty repair welding done several years before. The two technicians most responsible for signing off on that particular shoddy repair both committed suicide soon after the tragedy. One of them even left a note specifically stating "With my death I atone" (paraphrasing from memory here).

I can't even begin to imagine a modern Boeing executive or senior staffer doing the same.

The same couldn't be said for Japanese military officials after the tragedy, though, so who knows about cultural tendencies:

Right after the crash, helicopters were making ready to fly to the scene (it was night by this point) and a nearby U.S. military helicopter squadron even offered to fly in immediately. The local JSDF administration, however, stood all these requests down until the following morning, on the claim that such a tremendous crash couldn't have left anyone alive, so why hurry?

As it turned out, quite a number of people had incredibly survived, and they slowly died during the night from exposure to cold and their wounds, according to testimony from the four who did survive to be rescued, and from doctors who later conducted postmortems on the bodies.

tra3•4mo ago
What an incredible story. Thanks for sharing.
throwaway290•4mo ago
Happened more recently too https://www.straitstimes.com/asia/east-asia/south-korean-ex-...
southernplaces7•4mo ago
Interesting case too, and he committed suicide despite not really being blamed, from what I just read.

On the other hand you have cases like the MV Sewol ferry disaster, also in South Korea, in which well over 250 passengers died horribly. Most of them were just kids, high school students on a trip. The causes leading up to the tragedy, the accident management by the crew itself, and the subsequent rescue, body retrieval, and investigation were absolutely riddled with negligence, incompetence, bad management, and all kinds of blame shifting.

The owner of the ferry company had an arrest warrant issued for him, fled, and was only later found dead in a field, presumed to have committed suicide.

Underlying all this is that even these apparent cultural ideas of committing suicide to atone for the shame of some gigantic mistake don't seem to make people avoid those kinds of mistakes, or do things more responsibly, in the first place.

https://en.wikipedia.org/wiki/Sinking_of_MV_Sewol

veeti•4mo ago
Obligatory long form link: https://admiralcloudberg.medium.com/fire-on-the-mountain-the...
southernplaces7•4mo ago
Wish I'd thought to include it myself!
ookblah•4mo ago
after the kakao fire incident and now this i struggle to understand how they got so advanced in other areas. this is like amateur hour level shit.
pezezin•4mo ago
It is the same in Japan. They are really good for hardware and other "physical" engineering disciplines, but they are terrible when it comes to software and general IT stuff.

Seriously, I work here as an IT guy and I can't stop wondering how they could become so advanced in other areas and stay so backwards in anything software-related except videogames.

rester324•4mo ago
Yeah. This is my exact experience too wrt Japan! The Japanese just somehow can't assess or manage the scale, the complexity, the risk, the effort, or the cost of software projects. Working in Japan as a software guy feels like working in a country lagging 30-40 years behind :/
pezezin•4mo ago
In my experience, risk assessment is the worst part. Japanese culture is extremely risk averse, and the moment you ask them to do something that they have never done before, they freeze in panic. They need a procedure for everything, and have a really hard time improvising.

That's why they are good for industrial processes where they can iterate and improve in small, incremental steps, but terrible for software projects full of uncertainties.

Theodores•4mo ago
I was smirking at this until I remembered that I have just one USB stick as my 'backup'. And that was made a long time ago.

Recently I have been thinking about whether we actually need governments, nation states, and all of the hubris that goes with them, such as the news media. Technically this means 'anarchism', with everyone running riot amid chaos. But that is just a big fear; the more I think through the 'no government' idea, the less ludicrous it sounds. Much can be devolved to local government, and so much else isn't really needed.

South Korea's government has kind of deleted itself, and my suspicion is that, although it's a bad day for some, life will go on and everything will be just fine. In time some might even be relieved that they don't have this vast data store any more. Regardless, it is an interesting story with respect to my thoughts on the benefits of no government.

poncho_romero•4mo ago
Government is whatever has a monopoly on violence in the area you happen to live. Maybe it’s the South Korean government. Maybe it’s a guy down the street. Whatever the case, it’ll be there.
forinti•4mo ago
What structure could possibly preclude backups? I've never seen anything that couldn't be copied elsewhere.

Maybe it was just convenient to have the possibility of losing everything.

Johnny555•4mo ago
I think they alluded to that earlier in the article:

>However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.

I think they decided that their storage was too slow to allow backups?

Seems hard to believe that they couldn't manage any backups... other sources said they had around 900TB of storage. An LTO-9 tape holds ~20TB uncompressed, so they could have backed up the entire system with 45 tapes. At 300MB/sec with a single drive, it would take them about a month to complete a full backup, so it seems like even a slow storage system should be able to keep up with that rate. They'd have a backup that's always a month out of date, but that seems better than no backup at all.

themafia•4mo ago
Too slow to allow batched backups. Which means you should just make redundant copies at the time of the initial save: encrypt a copy and send it offsite immediately.

If your storage performance is low then you don't need fat pipes to your external provider either.

They either built this too quickly, or there was too much industry corruption perverting the process and the government bought an off-the-shelf solution that was inadequate for their actual needs.

burnt-resistor•4mo ago
Let's run the numbers:

LTO-9 is ~$92/tape in bulk. A 4-drive library with 80-slot capacity costs ~$40k* and can sustain about 1 Gbps. It also needs someone to barcode, inventory, and swap tapes once a week, plus an off-site vaulting provider like Iron Mountain. That's another $100k/year. That tape library will also need to be replaced every 4-7 years, say every 6. And those tapes wear out over X uses and sometimes go bad too. It might also require buying a server and/or backup/DR software. Furthermore, a fire-rated data safe is recommended for about 1-2 weeks' worth of backups and spare media. Budget at least $200k/year for off-site tape backups for a minimal operation. (Let me tell you about the pains of self-destructing SSL2020 AIT-2 Sony drives.)

If backups for this and other critical services were combined, it would probably be cheaper to scale this kind of service rather than reinventing the wheel for just one use case in one department. That would allow for multiple types of optimization, like network-based backups to nearline storage that is then streamed more directly to tape, using many more tape drives, possibly tape silo robot(s), and perhaps a split into 2-3 backup locations, obviating the need for off-site vaulting.

Furthermore, it might be simpler, although more expensive, to operate another hot/warm site for backups and temporary business-continuity restoration, using a pile of HDDs and a network connection that's probably faster than that tape library. (Use backups, not replication, because replication faithfully copies errors to the other sites too.)

Or the easiest option is to use one or more cloud vendors for even more $$$ (build vs. buy tradeoff).

* Traditionally (~20 years ago), enterprise gear's "retail" prices carried around a 100% markup, allowing for up to around a 50% discount when negotiated in large orders. Enterprise gear also had a lifecycle of around 4.5 years; while it might still technically work after that, there wouldn't be vendor support or replacements, so enterprise customers are locked into perpetual planned-obsolescence consumption cycles.

Johnny555•4mo ago
$500K/year to back up a system used by 750,000 people is $0.67 per user per year. Practically free.

At least now they see the true cost of not having any off-site backups. It's a lot more than $0.67 per user.

jiggawatts•4mo ago
A key metric for recovery is the time it takes to read or write an entire drive (or drive array) in full. This is simply a function of capacity and bandwidth, and it has been getting worse and worse: drive capacities have increased exponentially, but throughput hasn't kept up at the same pace.

A typical drive from two decades ago (circa 2005) might have been 0.5 TB with a throughput of 70 MB/s, for a full-drive transfer time (FDTT) of about 2 hours. A modern 32 TB drive is 64x bigger but has a throughput of only 270 MB/s, which is less than 4x higher. Hence the FDTT is 33 hours!

And this is the optimal scenario; things get worse in modern high-density disk arrays that may have 50 drives in a single enclosure with as little as 8-32 Gbps (1 GB/sec to 4 GB/sec) of effective bandwidth. That can push FDTT times out to many days or even weeks.

I've seen storage arrays where the drive trays were daisy chained, which meant that while the individual ports were fast, the bandwidth per drive would drop precipitously as capacity was expanded.

It's a very easy mistake to just keep buying more drives, plugging them in, and never going back to the whiteboard to rethink the HA/DR architecture and timings. The team doing this kind of BAU upgrade/maintenance is not the team that designed the thing originally!

summerlight•4mo ago
Basically it all boils down to budget. The engineers knew this was a problem and wanted to fix it, but that costs money. And you know, the bean counters in the treasury are basically like, "well, it works fine, why do we need that fix?", and the last conservative government was in full spending-cut mode. You know what happened there.
rasz•4mo ago
It's Korea, so most likely fear of annoying higher-ups when seeking approvals.

Koreans are weird; for example, they would rather eat a contractual penalty than report problems to the boss.

r0ckarong•4mo ago
My guess is someone somewhere is very satisfied that this data is now unrecoverable.
gritzko•4mo ago
In a world where data centers burn and cables get severed physically, the entire landscape of tradeoffs is different.
fijiaarone•4mo ago
What info needed to be destroyed and who did it implicate?
crmd•4mo ago
I would love to know how a fire of this magnitude could happen in a modern data center.
AnimalMuppet•4mo ago
Allegedly from replacing batteries.
esskay•4mo ago
Often poor planning or just lithium based batteries far too close to the physical servers.

OVH's massive fire a couple of years ago, in one of the most modern DCs at the time, was a prime example of just how wrong it can go.

bell-cot•4mo ago
Assume the PHBs who wouldn't spring for off-site backups (whereas excuses are "free") also wouldn't spring for fire walls, decently trained staff, or other basics of physical security.
filloooo•4mo ago
Their decade-old NMC li-ion UPSs were placed 60cm away from the server racks.
BonoboIO•4mo ago
Easy: some electrical fault. Look at OVH, with its WOODEN FLOORS and bad management decisions. But of course the servers had automatic backups … in a datacenter in the same building. A few companies lost EVERYTHING and had to close because of it.
nullable_bool•4mo ago
I like to think that at least one worker was loafing on a project that was due the next day and there was no way it was going to get done. Their job was riding on it. They got drunk to embrace the doom that faces them, only to wake up with this news. Free to loaf another day!
kupopuffs•4mo ago
just his luck
mekoka•4mo ago
This is wild. Wilder would be to see that the government runs the same after the loss.
WiggleGuy•4mo ago
I was in Korea during the Kakao fire incident and thought it was astounding that they had no failovers. Still, I thought it'd be a wake-up call!

I guess not.

HeavyStorm•4mo ago
Well, I'll be. Backup is a discipline not to be taken lightly by any organization, especially a government. Fire? This is backup 101: files should be backed up, and copies should be kept physically apart to avoid losing data.

There are some in this thread pointing out that this would be handled by cloud providers. That's bad thinking: you can't hope for transparent backups; you need to actively maintain a discipline around them.

My fear is that our profession has become very amateurish over the past decade and a lot of people are vulnerable to this kind of threat.

creakingstairs•4mo ago
One of the workers jumped off a building. [1] They say the person was not being investigated for the incident, but I can’t help but think he was put under intense pressure to be the scapegoat, given how fucked up Korea can be in situations like this.

To give some context on the Korean IT scene: you get pretty good pay and benefits if you work for a big product company, but you will be treated like dogshit inside subcontracting hell if you work anywhere else.

[1] https://www.hani.co.kr/arti/society/society_general/1222145....

jiggawatts•4mo ago
I was the principal consultant at a subcontractor to a contractor for a large state government IT consolidation project, working on (among other things) the data centre design. This included the storage system.

I noticed that someone had daisy-chained petabytes of disk through relatively slow ports and hadn’t enabled the site-to-site replication that they had the hardware for! They had the dark fibre, the long-range SFPs, they even licensed the HA replication feature from the storage array vendor.

I figured that in a disaster just like this, the time to recover from the tape backups — assuming they were rotated off site, which might not have been the case — would have been six to eight weeks minimum, during which a huge chunk of the government would be down. A war might be less disruptive.

I raised a stink and insisted that the drives be rearranged with higher bandwidth and that the site-to-site replication be turned on.

I was screamed at. I was called unprofessional. “Not a team player.” Several people tried to get me fired.

At one point this all culminated in a meeting where the lead architect stood up in front of dozens of people and calmly told everyone to understand one critical aspect of his beautiful design: No hardware replication!!!

(Remember: they had paid for hardware replication! The kit had arrived! The licenses were installed!)

I was younger and brave enough to put my hand up and ask “why?”

The screeched reply was: The on-prem architecture must be “cloud compatible”. To clarify: He thought that hardware-replicated data couldn’t be replicated to the cloud in the future.

This was some of the dumbest shit I had ever heard in my life, but there you go: decision made.

This. This is how disasters like the one in South Korea happen.

In private organisations you get competent shouty people at the top insisting on a job done right. In government you get incompetent shouty people insisting that the job gets done wrong.

Swenrekcah•4mo ago
> In private organisations you get competent shouty people at the top insisting on a job done right. In government you get incompetent shouty people insisting that the job gets done wrong.

Great post and story but this conclusion is questionable. These kinds of incompetences or misaligned incentives absolutely happen in private organisations as well.

jiggawatts•4mo ago
Much more rarely in my experience, having been at both kinds of organisations.

There’s a sort-of “gradient descent” optimisation in private organisations, established by the profit motive and the competitors nipping at their heels. There’s no such gradient in government, it’s just “flat”. Promotions hence have a much weaker correlation with competence and a stronger correlation with nepotism, political skill, and willingness to participate in corruption.

I’ve worked with many senior leaders in all kinds of organisations, but only in government will you find someone who is functionally illiterate and innumerate in a position of significant power.

Obviously this is just a statistical bias, so there’s overlap and outliers. Large, established monopoly corporations can be nigh indistinguishable from a government agency.

foofoo12•4mo ago
This is extraordinarily loony shit. Someone designed a system like this without backups? Someone authorized its use? Someone didn't scream and yell that this was bat- and apeshit-wacky-level crazy? Since 2018? Christ almighty.
hopelite•4mo ago
Does anyone have an understanding of what the impact of this will be, i.e., what scale of government impact and what type of data are we talking about here?

Is this going to have a real impact in the near term? What kind of data are we talking about being permanently lost?

spawarotti•4mo ago
There are two types of people: those who do backups, and those who will do backups.
mmaunder•4mo ago
Are we talking about actual portable Thunderbolt 3-connected RAID 5 G-Drive arrays with between 70 and 160TB of storage per array? We use those for film shoots to dump TBs of raw footage. That G-Drive?? The math checks out at 30GB for around 3,000 users on a single RAID 5 array. This would be truly hilarious if true.
ChuckMcM•4mo ago
Article comments aside, it is entirely unclear to me whether or not there were no backups. Certainly no "external" backups, but potentially "internal" ones. My thinking is that not allowing off-site backups and forcing all data there creates a prime target for the DPRK folks, right? I've been in low-level national defense meetings about security where things like "you cannot back up off site" are discussed, but there are often fire vaults[1] on site which are designed to withstand destruction of the facility by explosive force (aka a bomb) or fire or flood, etc.

That said, people do make bad calls, and this would be an epically bad one, if they really don't have any form of backup.

[1] These days, creating such a facility for archiving an exabyte of essentially write-mostly data is quite feasible. See this paper from nearly 20 years ago: https://research.ibm.com/publications/ibm-intelligent-bricks...

acchow•4mo ago
They did have backups. But the backups were also destroyed in the same fire.
derleyici•4mo ago
Then it's just incompetence. Even I have my backup server 100 km away from the master one.
jedimastert•4mo ago
> My thinking is that not actually allowing backups and forcing all data there creates a prime target for the PRK folks right?

It's funny that you mention that...

https://phrack.org/issues/72/7_md#article

ChuckMcM•4mo ago
Ouch
kwhitefoot•4mo ago
> there are often fire vaults[1]

Many years ago I was a Unix sysadmin responsible for backups, and that is exactly what we did. Once a week we rotated the backup tapes, taking the oldest out of the fire safe and putting the newest in. The fire safe was in a different building.

I thought that this was quite a normal practice.

nowittyusername•4mo ago
I must say, at least for me personally, when I hear about such levels of incompetence it rings alarm bells in my head, making me think that maybe intentional malice was involved. Like someone higher up had set the whole thing up to happen in such a manner because there was a benefit to it we are unaware of. I think this belief maybe stems from a lack of imagination about how stupid humans can really get.
quantumsequoia•4mo ago
Most people overestimate the prevalence of malice, and underestimate the prevalence of incompetence.
mliezun•4mo ago
What do you make of this? The guy who was in charge of restoring the system was found dead

https://www.thestar.com.my/aseanplus/aseanplus-news/2025/10/...

BizarroLand•4mo ago
My guess would be that either he felt it was such a monumental cockup that he had to off himself or his bosses thought it was such a monumental cockup that they had to off him.
vayup•4mo ago
A lot of folks are arguing that the real problem is that they refused to use US cloud providers. No, that's not the issue. It's a perfectly reasonable choice to build your own storage infrastructure if it is needed.

But the problem is they sacrificed "Availability" in pursuit of security and privacy. Losing your data to natural and man-made disasters is one of the biggest risks facing any storage infrastructure. Any system that cannot protect your data against those should never be deployed.

"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."

This is not a surprise to them. They had knowingly accepted the risk of infrastructure being destroyed by natural and man-made disasters. I mean, WTF!

TulliusCicero•4mo ago
Yeah, it's such a lame excuse to say "did not allow for external backups", as if that's a reasonable choice that they just couldn't work around.

South Korea isn't some poor backwater, they have tech companies and expertise, that they were "unable" to do backups was an intentional choice.

reed1234•4mo ago
Durability is more precise than availability in this context because it is about the data surviving (not avoiding downtime).
john-tells-all•4mo ago
This is literally comic. The plot of the live-action comic book movie "Danger: Diabolik" [1] has a segment where a country's tax records are destroyed, making it impossible for the government to collect taxes from its citizens.

[1] https://en.wikipedia.org/wiki/Danger:_Diabolik

725686•4mo ago
In my twenties I worked for a "company" in Mexico that was the official QNX distributor for Mexico and LatAm. I guess the only reason was that Mexico City's Metro used QNX, and every year they bought a new license, I don't know why. We also did a couple of sales in Colombia, I think, but it was a complete shit show. We really just sent them the software by mail, and they had all sorts of issues getting it out of customs. I did get to go to a QNX training in Canada, which was really cool. Never got to use it though.
_kst_•4mo ago
I think you meant to post this comment here: https://news.ycombinator.com/item?id=45481892
725686•4mo ago
Hmm, yes...I don't see how to move or remove, so...sorry for that
biglyburrito•4mo ago
"A source from the Ministry of the Interior and Safety said, “The G-Drive couldn’t have a backup system due to its large capacity” "

:facepalm:

throwaway2037•4mo ago
At the very bottom of the article, I see this notice:

    > This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.
I like that. It is direct and honest. I'm fine with people using LLMs for natural language related work, as long as they are transparent about it.
BlueTemplar•4mo ago
It's still worse than useless:

https://www.bloodinthemachine.com/p/ai-killed-my-job-transla...

positron26•4mo ago
I really don't get this take where people try to downplay AI hardest exactly where it is obviously having the most impact. Sure: a billion people are supposed to go back to awful machine translation so that a few tens of thousands can keep jobs that were already commodity work.
schrodinger•4mo ago
I have sympathy for those affected but this article is disingenuous. I speak Spanish and have just gone to 3 or 4 Spanish news sites, and passed their articles through to ChatGPT to translate "faithfully and literally, maintaining everything including the original tone."

First it gave a "verbatim, literal English translation" and then asked me if I would like "a version that reads naturally in English (but still faithful to the tone and details), or do you want to keep this purely literal one?"

Honestly, the English translation was perfect. I know Spanish, I knew the topic of the article and had read about it in the NYTimes and other English sources, and I am a native English speaker. It's sad, but you can't put the toothpaste back in the tube. LLMs can translate well, and the article saying otherwise is just not being intellectually honest.

alsetmusic•4mo ago
What isn't tested here, and what I can't test myself as a monolingual person, is how well English is translated to other languages. I'm sure it's passable, but I absolutely expect it to be worse, because most of the people working on this live in the USA / speak English and work the most on that direction.

I want to know how it holds up translating Spanish to Farsi, for example.

samus•4mo ago
I see a high risk of idioms getting butchered. It's usually a good idea to translate into English and fix that up first. And unless a native-language editor revises it, there might be sentence structures that feel unnatural in the target language.

A classic issue is dealing with things like wordplay. Good bilingual editors might be able to get across the intended meaning in other ways, but I highly doubt translation software is capable of even recognizing it.

kace91•3mo ago
Spanish is probably the language most likely to be successful (due to the number of Spanish speakers in the US). Still, English to Spanish, while passable, is very clearly not something that passes for native speech.

Funnily enough, I'd say it reads like most of my American friends here in Spain; the best way I can put it is that it's fluent Spanish from a brain that is working natively in English and translating on the fly, rather than a mind thinking in Spanish.

This is obvious to me because I speak both languages, so I can trace back in my mind the original, native English phrase that resulted in a specific weird Spanish expression. A Spanish monolingual can probably only tell that it doesn't sound native.

The important point, though, is that there is no significant loss of meaning other than the text being annoying to read. It won't work for literature, but it's perfectly serviceable for pragmatic needs.

Marsymars•4mo ago
This is how I’ve done translation for a number of years, even pre-LLM, between the languages I speak natively - machine translation is good enough that it’s faster for me to fix its problems than for me to do it from scratch.

(Whether machine translation uses LLMs or not doesn’t seem especially relevant to the workflow.)

alsetmusic•4mo ago
My partner is a pro-democracy fighter for her country of origin (she went to prison for it). She used to translate English articles of interest into her native language for all the fellow exiles from her country. I showed her Google Translate, and it blew her mind how much work it did for her. All she had to do was review it and clean it up.

The AI hype train is bs, but there're real and concrete uses for it if you don't expect it to become a super-intelligence.

antonvs•4mo ago
> The AI hype train is bs, but there're real and concrete uses for it

When you consider that there are real and concrete uses for it across a wide variety of domains, the hype starts to make more sense.

Obviously Sam “we’ll build a Dyson sphere with it” Altman is off in hype lala land somewhere while he tries to raise a trillion dollars to burn through fossil fuels as fast as possible, but that’s a kind of symptom of the real underlying capabilities and promise here.

throwaway2037•4mo ago

    > The AI hype train is bs, but there're real and concrete uses for it if you don't expect it to become a super-intelligence.
I agree 100% with this sentiment. Another good use case: ask an LLM to summarize a large document. Again, not super-intelligence, but it can be a big timesaver that reduces "intern work". I have heard some people have an LLM plug-in for their Microsoft Outlook (Exchange) that lets them summarize an email thread. Again, not perfect, but it helps to reduce cognitive load. Another practical example: using an LLM with conference calls to transcribe meeting notes and provide a summary. Then you can review the summary, fix any obvious errors, and send it by email to participants.
AnotherGoodName•4mo ago
Especially since LLM tech was originally developed for translation. That’s the original reason so much work was done to create a model that could handle context and it turned out that was helpful in more areas than just translation.

While LLM usage is just spinning up in other areas, for translation they have been doing this job well for over 5 years now.

fragmede•4mo ago
Specifically, GNMT came out in 2016, which is 9 years ago.

GNMT used seq2seq with attention to do translations. GNMT plus further RNN-and-attention work led to transformers, and here we are today.

refulgentis•4mo ago
> While LLM usage is just spinning up in other areas,

Oh?

ants_everywhere•4mo ago
> I'm fine with people using LLMs for natural language related work

phew I'm relieved you're okay with people using modern tools to get their job done

beefnugs•4mo ago
So just a blanket message at the bottom of the page: "anything and everything you read here might be total bullshit"
saghm•4mo ago
FWIW that happens sometimes with traditional reporting too. At the end of the day, it's just a matter of degree, and to be truly informed you need to be willing to question the accuracy of your sources. As the parent comment said, at least they're being transparent, which isn't always the case for traditional reporting.
_heimdall•4mo ago
That footnote does make me question the bilingual reporter's skills in both languages though. If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.

The final note that all AI-assisted translations are reviewed by the newsroom is also interesting. If they are going to take the time to review it and have enough experience in both languages to verify the translation, why use the LLM for it at all?

highwind•4mo ago
> That footnote does make me question the bilingual reporter's skills in both languages though. If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.

I've done my fair share of translating as a bilingual person, and having an LLM do a first pass at translation saves a TON of time. I don't "need" an LLM, but it's definitely a helpful tool.

spacechild1•4mo ago
The reporter does not need the LLM, but it's often faster to review/edit a machine translation than doing the whole translation by yourself
drnick1•4mo ago
> If they are going to take the time to review it and have enough experience in both languages to verify the translation, why use the LLM for it at all?

People generally read (and make minor edits if necessary) much faster than they can write.

vineyardmike•4mo ago
> It was then edited by a native English-speaking editor.

Two different editors.

But as others mentioned, this is helpful even for the same editor to do.

phantomathkg•4mo ago
If using an LLM can shorten the time a reporter needs to rewrite the whole article in a language they are fluent in but find effortful to write, why not?

This gives the reporter more time to work on more articles, and we, as foreigners to Korea, get more authentic Korean news that is reviewed by a Korean speaker rather than by Google Translate.

throwaway2037•4mo ago

    > If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.
You raise an interesting point about "missing subtle mistranslations". Consider the stakes for this article: this is highly factual news reporting, and there is unlikely to be complex or subtle grammar. However, when translating an interview, the stakes are higher, as people use many idiomatic expressions when speaking their native language. Thinking deeper: the highest stakes (culturally) that I can think of are in translating novels. They are full of subtle meanings.
hackernewds•4mo ago
why would you not be fine about it?
Diti•4mo ago
You probably don’t want to read news websites which are nothing but LLM output without a journalist reviewing the articles. Unless you’re a fan of conspiracy theories or ultra-aligned content.
aspenmayer•4mo ago
Case in point:

A New Gaza Rage Machine–With Polish Origins - https://news.ycombinator.com/item?id=45453533

lenkite•4mo ago
As long as the LLM doesn't hallucinate stuff when translating, by generating text that is inaccurate or even completely fabricated.
Awesomedonut•4mo ago
Not even one redundant backup? That's unimaginable for me
UltraSane•4mo ago
This is amazingly incompetent because all the major enterprise storage arrays support automatic replication to remote arrays.
hero4hire•4mo ago
In 2025 data storage used by nation states, exposed to the internet, has no backups.

No offsite backups. No onsite backups. No USB drives lying around unsecured in a closet. Nothing.

What?

anal_reactor•4mo ago
> no back-ups

Top fucking kek. What were they expecting to happen? Like, really? What were they thinking?

3eb7988a1663•4mo ago
While I am sure a huge portion of valuable work will be lost, I am smirking thinking of management making a call, "So, if there is any shadow IT who has been running mirror databases of valuable infrastructure, we would have a no questions asked policy on sharing that right now".

I know that I have had to keep informal copies of valuable systems because the real source of truth is continually patched, offline, churning, whatever.

audiodude•4mo ago
Reminds me of when Toy Story 2 was deleted and they found the backups on the laptop of an artist who was working from home.
Blahagun•4mo ago
It was on an SGI workstation that they lugged home, but yeah, that's pretty much how they recovered most of the files. In the end they barely used the material.
tropdrop•4mo ago
>artist

technically, it was the supervising technical director.

The only reason this happened (I don't think "working from home" was very common in 1999) was because she had just had a baby! I love this story because it feels like good karma: management providing special accommodations for a new mom saves the show.

hobofan•4mo ago
If SK is anything like Germany or Japan in how they digitize their government processes, you'll probably be able to find paper printouts of all the data that was lost.
ThePowerOfFuet•4mo ago
The fun part will be finding them, figuring out their relevance, and re-digitizing them in a useful form.
3eb7988a1663•4mo ago
The extra fun will be if they can find multiple copies of unknown provenance. Who wins?

On the other hand, I hope a few boots on the ground get to use this as a chance to toss decades of bad technical debt. "Why are we still running that 2011 Oracle database version?".

sgammon•4mo ago
> There is a cert and private key for rc.kt.co.kr, South Korea Telecom's Remote Control Service. It runs remote support backend from https://www.rsupport.com. Kim may have access to any company that Korea Telecom was providing remote support for.
efitz•4mo ago
The lack of backups makes my blood boil. However, from my own experience, I want to know more before I assign blame.

The very first "computer guy" job I had starting in about 1990/1991, my mentor gave me a piece of advice that I remember to this day: "Your job is to make sure the backups are working; everything else is gravy."

While I worked in that job, we outgrew the tape backup system we were using, so I started replicating critical data between our two sites (using 14400 bps Shiva NetModems), and every month I'd write a memo requesting a working backup system and explaining the situation. The business was too cheap to buy it.

We had a hard drive failure on one of our servers. I requested permission to void the drive's warranty because I was pretty sure it was a bad bearing; I got it working for a few weeks by opening the case and spinning the platter with my finger to get it started. I made sure a manager was present so that they could understand how wack the situation was. They bought me a new drive, but not the extra drives I asked for in order to mirror it.

After I left that job, a friend of mine called me a month later and told me that they had a server failure and were trying to blame the lack of backups on me; fortunately my successor found my stack of memos.

LorenPechtel•4mo ago
Yeah. I've seen it. Had one very close call. The thieves took an awful lot of stuff, including the backups; had they taken the next box off the server room rack, the company would have been destroyed. They stole one of our trucks (which probably means it was an inside job) and appear to have worked their way through the building, becoming more selective as they progressed. We are guessing they filled the truck and left.

Did anything change? No.

nicbou•4mo ago
> fortunately my successor found my stack of memos

Those, ironically, were backed up

lettergram•4mo ago
The irony: not only was their system hacked ("hosted onsite"), but then it was also burned down onsite with no backups.

In other words.. there was no point to the extra security of being onsite, AND the risk of being an onsite single point of failure destroyed any evidence.

Pretty much what I'd expect tbh, but no remote backup is insane.

agnishom•4mo ago
They were using a private service to manage public infrastructure? One developed by a foreign company?
vimredo•4mo ago
The G in G-Drive stands for Government, not Google. It tricked me too.
bane•4mo ago
Goodness, I have over 100TB at home, and it cost less than two or three thousand dollars to put in place. That's like $25 per TB.

> The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets.

No, the 858TB amounts to under $25k for the government of the 10th-largest economy, one of the most sophisticated countries on the planet, to put in place.

Two of those would be less than the price of a new Hyundai Grandeur car.

> “It’s daunting as eight years’ worth of work materials have completely disappeared.”

So they're clocking in at around 100TB/year, or 280GB a day. It's respectable, but not crazy. It's about 12GB/hr, doable with professional, server-level hardware, with backups moved over dedicated fiber to an offsite location. Multiply the price 10x and you can SSD the entire thing.

Even with data sovereignty considerations demanding an entirely, 100% home-grown solution rather than turning to AWS or Azure, there's no excuse. And it's not like the cloud providers don't already have CSAP certification and local, in-country, sovereign clouds [1] with multiple geographic locations in the country [2].

South Korea is full of granite mountains; maybe it's time the government converts one into an offsite, redundant backup vault?

1 - https://erp.today/south-korea-microsoft-azure-first-hypersca...

2 - https://learn.microsoft.com/en-us/azure/reliability/regions-...

jart•4mo ago
The most sophisticated countries and companies are smart enough to use the least sophisticated backup methods. SK needs to back up their data to cassette tapes, and tape libraries cost a bit more than that, but not much. Even if they boat their tapes over to an Iron Mountain in the US, I can't imagine the equipment and service fees would cost them more than a few hundred grand. They'll spend more on the headcount to manage the thing.
hedora•4mo ago
The operational expenses of this stuff dwarf the hardware cost. For the tape mountain, you need robots to confirm the tapes still work (mean time to detection of device failure, and recovery time, are key inputs to RAID durability computations). So someone needs to constantly repair the robots, or whatever.

If I were being paid to manage that data set, I’d probably find two enterprise storage vendors and stick two copies of the data set on each of them, as a primary plus a secondary backup. Enterprise flash has been under a dollar a gigabyte for over a decade, so that’s under $1.7M per copy, amortized over five years. That’s $700K per year, and one of the four copies (at 3-4 sites) could be the primary store.

(I can’t be bothered to look up current prices, but Moore’s law says there have been six capacity doublings since then, and it still applies to flash and networking, so divide my estimate by 2^6: ten-ish grand per year, with zero full-time babysitters required.)

chii•4mo ago
Even with dual vendors, you'd still have to put backup/restore procedures in place (with the associated software, which may need to be custom). Then you'd need regular testing. These operational concerns will basically double the cost yearly, probably.
Maxion•4mo ago
You'll need permanent staff to oversee this, too. That will add another ~$500k+ to your annual expenditure.
samus•4mo ago
The article reads like they actually do have a fault-tolerant system for most of their data. This is probably a data dump for whatever files they are working with, something that might have started out as a cobbled-together prototype, picked up momentum, and was pushed beyond its limitations. Many such cases, not only in government IT...
bane•4mo ago
Looking at the article, my read (which could be wrong) is that the backup was in the same room as the original.
lazyasciiart•4mo ago
No. It says that "most systems" in this data center are backed up to separate hardware on a different floor, and then a backup is made at a physically remote location. This particular G-Drive system was not on the standard backup process - it sounds like it was much higher volume than any others, so maybe they couldn't use it. They did have a pilot going to get G-Drive backed up...it was supposed to be scaled up to the whole thing in December.
xvector•4mo ago
Wow, rough timing.
yrcyk•4mo ago
From what I'm seeing, that pilot was about NTOPS (the National Total Operation Platform System), which they use to manage everything, and since that was on the floor above the fire, it made recovery a lot more complicated.

There's a high chance I'm missing something though; where did you read about a G-Drive backup system?

lazyasciiart•4mo ago
G-Drive is the name of their bespoke file storage system that was taken down by the fire.
WalterBright•4mo ago
You can buy a 24TB drive on sale for $240 or so.

Sometimes I wonder why I still try and save disk space :-/

DaiPlusPlus•4mo ago
Link? Am both curious and skeptical
bedstefar•4mo ago
Not vouching for the parent's claim, but generally check diskprices.com for the latest deals on Amazon (.com, .co.uk, .de, .es, .it, etc.)
WalterBright•4mo ago
Newegg regularly offers sales on hard drives, which is when I buy.
import•4mo ago
Very possible with sales, especially for non-NAS-grade hard disks.
joshvm•4mo ago
Seagate Expansion drives are in this price range and can be shucked. They're not enterprise drives meant for constant operation, the big ones are Barracudas or maybe Exos, but for homelab NAS they're very popular.
golem14•4mo ago
I've had such a NAS for 8 years (and a smaller Netgear one from maybe 16 years ago), and have yet to have such a disk fail. But you can get unlucky, buying a supposedly new but "refurbished" item via Amazon or the Seagate store (so I hear), or getting the equivalent of the "Death Star" HDDs, which had a ridiculously high burnout rate (we measured something like >10% of the drives failing every week across a fairly large deployment in the field; major bummer).

If you use such consumer drives, I strongly suggest making occasional offsite backups of large, mostly static files (movies, for most people, I guess), and frequent backups of more volatile directories to an offsite place, maybe encrypted in the cloud.

WalterBright•4mo ago
Only a fool would have 24TB of data and entrust it to a single drive. Of course you buy more than one.
golem14•4mo ago
Of course you would stagger the offline backups. But if we are talking about storing e.g. movies, the worst-case scenario is really not so bad (unless you have the last extant copies of some early Dr Who episodes, in which case the BBC would want to have a word with you).
ppg_hero•4mo ago
https://pricepergig.com/us?minCapacity=24000 shows the cheapest 24TB drive is $269.99 right now, so yeah, with a sale you'll get to $240. But if you're ok with smaller drives, you can get a much better price per gig ratio
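
For comparison, the arithmetic is a one-liner; the 24TB prices below are the ones quoted in this thread, while the 12TB figure is an invented example:

    # Price-per-terabyte comparison. The 24TB figures are from this thread;
    # the 12TB price is a made-up illustration.
    drives = {
        "24TB on sale": (240.00, 24),
        "24TB list": (269.99, 24),
        "12TB (hypothetical)": (119.99, 12),
    }
    for name, (price, tb) in drives.items():
        print(f"{name}: ${price / tb:.2f}/TB")
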
Nifty3929•4mo ago
~1PB of data, with ingestion at a rate of 12GB per hour, is a tiny amount of data for a developed-world government to manage and back up properly. This is silly. Volume clearly should not have been a hindrance.

Backup operations are often complex and difficult - but then again, they've been worked on for decades, and rigorous protocols exist which can and should be adopted.

"However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained" ... "the G-Drive’s structure did not allow for external backups."

Clearly [in]competence was the single factor here.

This is what happens when you come up with all kinds of reasons to do something yourself, which you are not qualified to do, rather than simply paying a vendor to do it for you.

lloeki•4mo ago
> Backup operations are often complex and difficult

It quickly becomes much less so if you satisfy yourself with very crude methods.

Sure that would be an imperfect backup in many ways but any imperfect backup is always infinitely better than no backup at all.
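
For illustration, "crude" can be as little as a dated tarball shipped to a mount that lives somewhere else. A minimal sketch (the paths are hypothetical):

    # Crude offsite backup: tar up a tree, name it with a timestamp,
    # and write it to an independently located mount.
    import pathlib, shutil, time

    SRC = pathlib.Path("/srv/gdrive")           # hypothetical data to protect
    DST = pathlib.Path("/mnt/offsite-backups")  # hypothetical remote mount

    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(DST / f"gdrive-{stamp}"), "gztar", SRC)
    print("wrote", archive)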

Den_VR•4mo ago
But it would not have been $25k, it would have been 1-2 million for an “enterprise grade” storage solution from Dell or a competitor. Which isn’t much compared with your granite mountain proposal, nor with the wages of 750,000 civil servants, but it’s a lot more than $25k.
bestham•4mo ago
Azure can only be sovereign to the USA.[1] [2]

[1]: https://www.computerweekly.com/news/366629871/Microsoft-refu... [2]: https://lcrdc.co.uk/industry-news/microsoft-admits-no-guaran...

leentee•4mo ago
No backup, no replica? Such a shame.
fredsmith219•4mo ago
Wow. Maybe backups would have been a good idea.
BonoboIO•4mo ago
Look at OVH a few years ago… they had backups in the same datacenter.

https://www.datacenterdynamics.com/en/news/ovhcloud-fire-rep...

yongjik•4mo ago
I see some comments about North Korean hacking, so I feel I need to clear up some misconceptions.

First, (as you guys have seen) South Korea's IT security track record is not great. Many high-profile commercial sites have been hacked. If a government site were hacked by North Korea, it wouldn't be the first, and while it would be another source of political bickering and finger-pointing, it would likely blow over in a month.

In fact, given that SK's president Lee started his term in June after his predecessor Yoon's disastrous attempt at overthrowing the constitution, Lee could easily frame this as a proof of the Yoon admin's incompetence.

But deliberately setting fire to a government data center? Now that's a career-ending move. If that's found out, someone's going to prison for the rest of their life. Someone would have to be really desperate to attempt that kind of thing. But what could be so horrible that they would rather risk everything to burn the evidence? Merely "we got hacked by North Korea" doesn't cut it.

Which brings us to the method. A bunch of old lithium batteries, overdue for replacement; predictably, the job was sold to the lowest bidder. And the police know the identity of everyone involved in the job and are questioning them as we speak.

So if you are the evil perpetrator, either you bribed one of the lowest-wage workers to start a fire (and the guy is being questioned right now), or you just hoped one of the age-old batteries would randomly catch fire. Neither sounds like a good plan.

Which brings us to the question "Why do people consider that plausible?" And that's a doozy.

Did I mention that President Yoon almost started a coup and got kicked out? Among the countless stupid things he did, he somehow got hooked on election conspiracy theories claiming that South Korea's election commission was infiltrated by Chinese spies (along with major political parties, newspapers, courts, schools, and everything else) and that they cooked the numbers to make the (then-incumbent) People Power Party lose the 2024 congressional election.

Of course, the theory breaks down the moment you look closely. If Chinese spies had that much power, how come they let Yoon win his own election in 2022? Never mind that South Korea uses paper ballots, and every ballot at every voting place is counted under the watch of representatives from multiple parties. To change the numbers in one counting place, you'd have to bribe at least a dozen people. Good luck doing that at a national scale.

But somehow that doesn't deter the devoted conspiracy theorists, and now there are millions of idiots in South Korea who shout "Yoon Again" and believe our lord and savior Trump will come to Korea any day now, smite Chinese spy Lee and the communist Democratic Party from their seats, and restore Yoon to his rightful place in the presidential office.

(Really, South Korea was fortunate that Yoon had the charisma of a wet sack of potatoes. If he were half as good as Trump, who knows what would have happened ...)

So, if you listen to the news from South Korea, and somehow there's a lot of noise about Chinese masterminds controlling everything in South Korea ... well now you know what's going on.

1-6•4mo ago
You lost me at "Yoon overthrowing the constitution."
kepano•4mo ago
When I visited the National Museum of Korea in Seoul, one of my favorite parts was exploring the exhibit dedicated to backing up state data — via calligraphy, letterpress, and stone carving.

> "The Veritable Records of the Joseon Dynasty, sometimes called sillok (실록) for short, are state-compiled and published records, documenting the reigns of the kings of the Joseon dynasty in Korea. Kept from 1392 to 1865, they comprise 1,893 volumes and are thought to be the longest continual documentation of a single dynasty in the world."

> "Beginning in 1445, they began creating three additional copies of the records, which they distributed at various locations around Korea for safekeeping."

https://en.wikipedia.org/wiki/Veritable_Records_of_the_Joseo...

After the Japanese and Qing invasions of Korea, King Hyeonjong (1659–1675) started a project to collect calligraphy works written by preceding Joseon kings and carve them into stone.

It's somewhat surprising that these values didn't persist in the Korean government.

burnt-resistor•4mo ago
DR/BCP fail. The old adage that companies that lose all of their data typically go out of business within 6 months apparently doesn't apply when it's the government.

At a minimum, they could've stored the important bits, like financial transactions, personnel/HR records, and asset inventory database backups, in Tarsnap [0] and shoved the rest into encrypted tar archives at a couple of different providers like S3 Glacier and/or Box.

Business impact analysis (BIA) is a straightforward way to assess risk: probability of event * cost to recover from event = approximate budget for spending on mitigation.
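
As a toy instance of that formula (all figures below are invented for illustration):

    # Toy BIA arithmetic: expected annual loss bounds the mitigation budget.
    p_event_per_year = 0.01       # assumed chance of losing the site in a year
    recovery_cost = 500_000_000   # assumed cost of the event, in dollars
    budget = p_event_per_year * recovery_cost
    print(f"~${budget:,.0f}/year justifiable for mitigation")  # ~$5,000,000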

And, PSA: test your backups and DR/BCP runbooks periodically!

0. https://www.tarsnap.com

sbinnee•4mo ago
What sad news, as a Korean, to see a post about Korea at the top of HN during one of the largest Korean holidays.

I can share an anecdote about how slow tech adoption is in Korea. It is not exactly about tech in the public sector but in private companies. I assume the public sector has a slower adoption rate than private ones in general.

Just about a year ago I had a couple of projects with insurance companies. I won't name them, but they are the largest ones, whose headquarters you can find in the very center of Seoul. They often called me in because I was setting up on-premise servers for the projects. Beyond how hard it was to understand their database architecture choices well enough to plug them into the server I was setting up, their data team seemed simply incompetent, not knowing what they were doing.

The wildest thing I found was that most office workers seemed to be using Windows 2000 to run their proprietary software. To be fair, I like software UIs with lots of buttons and windows from that era. But alas, I didn't want to imagine myself connecting that legacy software to my then-current project. It didn't go that far in the end.

v7engine•4mo ago
Do South Korean companies prefer hosting data on their own servers instead of using Public cloud providers like Azure, AWS, GCP?
sbinnee•4mo ago
Yes and no. They used to prefer everything on-premise. Many are trying to move towards the cloud, especially newer companies. The major cloud providers you mentioned are not the usual choices, though (AWS is maybe the most common). They do have data centers in Seoul and are trying to expand in the South Korean market. But the government offers generous incentives for using domestic cloud providers like NHN, which was mentioned in the article, or Naver Cloud. Why does this work? Because Korean services rarely target global markets, mainly due to the language barrier. Domestic cloud capacity is sufficient.
bryanhogan•4mo ago
I think it's very interesting that Korea is probably the country with the fastest cultural adoption of new tech, e.g. #1 for ChatGPT, but on the other hand, as a web developer I can see that new web tech is often adopted at a very slow rate.
boodleboodle•4mo ago
We excel at things that look good on paper.
0xDEAFBEAD•4mo ago
Things South Korea is good at producing: Cars, ships, steel, semiconductors, electronics, medicines, tanks, aircraft parts, nuclear reactors...

Things South Korea is bad at producing: Software.

Not too bad overall.

trivo•4mo ago
Seems like everyone outside of the US is bad at producing software.
flakeoil•4mo ago
Yes, and US bad at everything else.
0xDEAFBEAD•4mo ago
The US economy is one of the world's most diverse in terms of exports:

https://oec.world/en/visualize/tree_map/hs92/export/usa/all/...

Side note: Why is there so much fact-free anti-US sentiment on HN?

miningape•4mo ago
Orange man bad
_vqpz•4mo ago
Yes, he is orange and bad. Glad we agree.
a456463•4mo ago
I mean he is. But people don't want facts
shigawire•4mo ago
I'd say it is more because "Orange man says we are bad at X, but only he can make X great again™".

People start believing we can't do anything at all.

TulliusCicero•4mo ago
> Side note: Why is there so much fact-free anti-US sentiment on HN?

Look at basically any domain: whoever's in the lead gets the most hate.

The US has the world's biggest economy and military, the most cultural power, the biggest tech industry, etc. The hate is inevitable. That's not to say that the US doesn't have plenty of very real problems -- obviously it does -- but it's just easier to dunk on the leader, which also means more empty, vacuous criticism.

flakeoil•4mo ago
> Why is there so much fact-free anti-US sentiment on HN?

Firstly, it's the US government themselves saying there are imbalances and therefore they have to add tariffs on imports from almost every country. It's the US government who spreads hate towards most other countries, not the other way around.

Secondly, could it be because people living in the US seem not to notice (or don't want to believe) that the US is turning into a dictatorship, while the rest of the world does? People don't like the new values of the USA; they liked the old values. If it continues like this, it's game over for the USA.

TulliusCicero•4mo ago
The US has super successful music and movie industries, puts out a lot of fossil fuels, hugely successful finance sector, and has the world's most powerful military. Really, the US has plenty of strengths to go along with its weaknesses.
pchew•4mo ago
Interesting interpretation of 'good' in regards to cars.
0xDEAFBEAD•4mo ago
My parents have been driving the same Hyundai for about 20 years. Never heard them complain about a problem.
TulliusCicero•4mo ago
> Things South Korea is good at producing: Cars, ships, steel, semiconductors, electronics, medicines, tanks, aircraft parts, nuclear reactors...

Also: music and TV shows.

> Things South Korea is bad at producing: Software.

Also: babies.

gkanai•4mo ago
Back when I worked for Mozilla, I had the chance to go to Seoul to meet with various companies and some governmental ministries. This was when Korean banks and e-commerce sites required Internet Explorer and ActiveX controls for secure transactions. This meant that macOS or Linux users couldn't do secure transactions in Korea without emulating Win/IE.
niutech•4mo ago
What was the outcome of these meetings? Have they switched to Firefox?
gkanai•4mo ago
They never did, afaict. Eventually smartphones became ubiquitous, and I think most South Koreans bank on their phones using apps. As for those who bank on computers, I don't know what happened when ActiveX was deprecated. It was a poor decision by the South Korean government to hang their hat on that technology.
sbinnee•4mo ago
They settled on Chromium-based browsers. Microsoft was pushing Edge, and Naver, the largest Korean search engine company, also developed the Whale browser based on Chromium.
m01•4mo ago
> I can share an anecdote about how slow tech adoption is in Korea. It is not exactly about tech in the public sector but in private companies. I assume the public sector has a slower adoption rate than private ones in general.

I guess it's not all tech, but at least in telecoms I thought they were very quick to adopt new things? 2nd in the world to commercially deploy 3G W-CDMA, world's first LTE-Advanced [1], "first fairly substantial deployments" of 5G [2]. 90% of broadband via fibre [3] (used to be #1 amongst OECD countries for some time; now it's only just #2).

[1] https://en.wikipedia.org/wiki/SK_Telecom#History

[2] https://en.wikipedia.org/wiki/5G#Deployment

[3] https://www.oecd.org/en/topics/sub-issues/broadband-statisti... -> Percentage of fibre connections in total broadband (June 2024) spreadsheet link

nwellinghoff•4mo ago
Could be incompetence. Highly likely. Or it could be… something more suspect.
marcus_holmes•4mo ago
Is it just me, or is this a massively better result than "1PB of government documents containing sensitive data about private individuals was exfiltrated to a hacker group and found for sale"?

I applaud them for honouring their obligation to keep such data private. And encourage them to work on their backup procedures while continuing to honour that obligation.

jeroenhd•4mo ago
A sibling comment links to a Phrack page (https://phrack.org/issues/72/7_md) about North Korean infiltration of South Korean systems. The timing of that page and the fire makes for a possible, though in my opinion wildly unlikely, scenario where either a saboteur started the fire when investigations were supposed to start, or (if you like hacking movies) a UPS battery was rigged to cause a fire by the spies inside the South Korean systems.

It's possible that this is all just a coincidence, but the possibility that North Korea is trying to cover their tracks is there.

awesomeusername•4mo ago
I'm CTO of a TINY company, with pretty much exactly half this amount of data. I run all storage and offsite backups personally, because I can't afford a full-time sysadmin yet.

And the cost of everything is PAIN to us.

If our building burned down we would lose data, but only the data we are Ok with losing in a fire.

I'd love to know the real reason. It's not some useless tech... it's politics, surely.

dbuser99•4mo ago
Sometimes it is convenient that there are no backups. Just saying…
Jean-Papoulos•4mo ago
>the G-Drive’s structure did not allow for external backups

That should be classified as willful sabotage. Someone looked at the cost line for having backups in another location and slashed that budget to make the numbers look good.

pammf•4mo ago
The real reason is that humans are way too optimistic in planning and, for some reason, tend to overlook rare but catastrophic risks even more.

I'm almost sure the system had some sort of local replication and versioning that was enough to deal with occasional deletions, rollbacks, and single, non-widespread hardware failures, so only the very catastrophic scenario of losing all servers at the same time (which surely wouldn't happen anytime soon) was left uncovered.

mrweasel•4mo ago
At a previous job I was not allowed to do disaster planning with customers, after I told one of them that it was entirely possible to take out both our datacenters with one plane crash. The two locations were a "safe" distance apart, but were also located fairly close to the approach of an airport, and a crashing passenger jet is big enough to take out both buildings.

Apparently I plan for the rather rare catastrophes, and not those customers care about day to day.

mewpmewp2•4mo ago
However, it's also possible that an asteroid, or a nuclear war, could destroy everything.
lordnacho•4mo ago
But it's extra surprising, because South Korea is a country where every young man is conscripted due to the threat of war with the North. If the conflict is serious enough for that, why hasn't someone thought about losing all the government data to a single artillery strike?
athrowaway3z•4mo ago
It would be wise for governments to define "backup" as something that is at least 1km away.
tylervigen•4mo ago
Probably farther than that, right? Plenty of natural disasters, including floods and wildfires, can affect an area larger than 1 km.
anticensor•4mo ago
Farther than 100 km...
ptojr•4mo ago
I wouldn't be surprised if someone caused this intentionally.
Ylpertnodi•4mo ago
> I wouldn't be surprised if someone caused this intentionally.

What, no backup(s) set up? Hmmm, possibly. But there'd be a paper trail.

Imagine all the scrambling going on right now, people desperately starting to cover their arses. But chances are, what they need has just burnt down, with no backups.

Lumoscore•4mo ago
Wow. That is genuinely one of the most terrifying headlines I've read all year.

Seriously, "no backups available" for a national government's main cloud storage? That’s not a simple IT oversight; that’s an epic, unforgivable institutional mistake.

It completely exposes the biggest fear everyone in tech has: putting all the eggs in one big physical basket.

I mean, we all know the rule: if it exists in only one place, it doesn't really exist. If your phone breaks, you still have your photos on a different server, right? Now imagine that basic, common-sense rule being ignored for a country’s central data.

The fire itself is a disaster, but the real catastrophe is the planning failure. They spent millions on a complex cloud system, but they skipped the $5 solution: replicating the data somewhere else—like in a different city, or even just another building across town.

Years of official work, policy documents, and data—just gone, literally up in smoke, because they violated the most fundamental rule of data management. This is a massive, expensive, painful lesson for every government and company in the world: your fancy cloud setup is worthless if your disaster recovery plan is just "hope the building doesn't burn down." It’s an infrastructure nightmare.

hooskerdu•4mo ago
This does a half-decent job of breaking down how things were affected: https://youtu.be/j454KF26IWw
NinjaTrance•4mo ago
The easy solution would be to use something like Amazon S3 to store documents as objects and let them worry about backup; but governments are worried (and rightly so) about the US government spying on them.

Thus, the not-so-easy-but-arguably-better solution would be to self-host an open source S3-compatible object storage solution.

Are there any good open source alternatives to S3?

thepill•4mo ago
I recently learned about https://garagehq.deuxfleurs.fr/ but I have no experience using it.
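
Since Garage, MinIO, and Ceph's RADOS Gateway all speak the S3 wire protocol, ordinary S3 tooling should work against them. A minimal sketch with boto3; the endpoint, credentials, bucket, and file names below are placeholders:

    # Uploading to a self-hosted S3-compatible object store via boto3.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objects.internal.example",  # your MinIO/Garage/RGW
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )
    s3.upload_file("budget.xlsx", "gov-backups", "2025/budget.xlsx")
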
jankal•4mo ago
When you wish the S. to be an N.
ken47•4mo ago
Coincidence is God’s way of remaining anonymous.
admiralrohan•4mo ago
Is there any solution to these kinds of issues other than having multiple backups and praying that they don't all catch fire at the same time?
rstuart4133•4mo ago
Backups are best thought of as a multi-dimensional problem, as in, they can be connected in many dimensions. Destroy a backup, and all the copies connected to it along some dimension are destroyed too. This means you must have redundancy in many dimensions. That all sounds a bit abstract, so ...

One dimension is two backups can be close in space (ie, physically close, as happened here). Ergo backups must be physically separated.

You've heard RAID can't be a backup? Well it sort of can, and the two drives can be physically separated in space. But they are connected in another dimension - time, as in they reflect the data at the same instant in time. So if you have a software failure that corrupts all copies, your backups are toast as you can't go back to a previous point in time to recover.

Another dimension is administrative control. Google Drive, for example, will back up your stuff, and separate it in space and time. But the copies are connected by who controls them. If you don't pay the bill or you piss Google off, you've lost all your backups. I swear every week I see a headline saying someone lost their data this way.

Then your backups can all be connected to you via one internet link, or connected to one electrical grid, or even one country that goes rogue. All of those are what I call dimensions, and you have to ensure your backups are held at a different position along each one.

Sorry, that didn't answer your question. The answer is no. It's always possible all copies could be wiped out at the same time. You are always relying on luck, and perhaps prayer, if you think that helps your luck.
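
One way to make the dimensions concrete (a toy sketch; the copies and their attributes are invented):

    # Flag any dimension along which every backup copy coincides;
    # such a dimension is a correlated failure waiting to happen.
    copies = [
        {"site": "daejeon", "admin": "gov", "medium": "disk", "snapshot": "live"},
        {"site": "daejeon", "admin": "gov", "medium": "disk", "snapshot": "live"},
    ]
    for dim in copies[0]:
        values = {c[dim] for c in copies}
        if len(values) == 1:
            print(f"all copies share {dim}={values.pop()!r}: single point of failure")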

admiralrohan•4mo ago
Interesting way to explain this, through the multiple-dimensions angle.
OJFord•4mo ago
> The scale of damage varies by agency. [...] The Office for Government Policy Coordination, which used the platform less extensively,

Amazing

alwahi•4mo ago
Government fires are never a mistake
phatfish•4mo ago
"The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets"

Just so we can all visualise this in an understandable way, if laid end-to-end how many times round the world would the A4 sheets go?

And what is their total area in football fields?

akpa1•4mo ago
About 2,355 and a bit times round the equator if you place them long edge to long edge (each sheet advances by its 210 mm short side).
porkbrain•4mo ago
If you stacked them they would be about fifty thousand Popocatépetls high, give or take a few zeroes.

UPDATE: as sibling pointed out indirectly, it's eight thousand Popocatépetls [0].

[0]: https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28thic...

panda-giddiness•4mo ago
Attached end-to-end, they'd extend almost from the Earth to the Sun [1].

Placed in a grid, they'd cover an area larger than Wales [2].

Piled on top of each other, they'd reach a tenth the distance to the moon [3].

---

[1] https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28leng...

[2] https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28area...

[3] https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28thic...
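
The same checks in a few lines, assuming A4 at 297 mm x 210 mm and a sheet thickness of roughly 0.1 mm:

    # Sanity-checking the A4 comparisons above.
    sheets = 449.5e9
    length_m = sheets * 0.297                # end to end, long side
    area_km2 = sheets * 0.297 * 0.210 / 1e6  # laid flat in a grid
    stack_km = sheets * 0.0001 / 1000        # ~0.1 mm per sheet

    print(length_m / 1.496e11)  # ~0.89 of the Earth-Sun distance
    print(area_km2 / 20779)     # ~1.35x the area of Wales
    print(stack_km / 384400)    # ~0.12 of the distance to the Moon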

nonethewiser•4mo ago
I am shocked that 1 and 2 are both true. I would have guessed 1 would have implied a much larger area than Wales.
Panzer04•4mo ago
Funny how unintuitive N^2 growth can be :D
Cthulhu_•4mo ago
I love that people are still trying to put data on A4s and we're long past the point of being able to visualize it.

That said, if I'm ever fuck-you rich, I'm going to have a pyramid built to bury me in, plus a library of hardcover printed Wikipedia.

BonitaPersona•4mo ago
Actual football fields please: the International Standard Unit football field, as used in SI countries.
andrewmcwatters•4mo ago
Regional SI football fields or Cup? ;)
nonethewiser•4mo ago
Football or soccer?
ramraj07•4mo ago
I know you want to think of this as a lot of data, but it really isn't that much. It would cost a few thousand at most to keep a copy in Glacier on S3, or a single IT person could build a NAS at home that could easily hold this data for a few tens of thousands, tops. The entire thing.
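
Back-of-envelope, using roughly $1/TB-month for S3 Glacier Deep Archive (an approximation of the published rate; check current pricing):

    # Rough cold-storage cost for the lost dataset.
    tb = 858
    usd_per_tb_month = 1.0  # approx. Deep Archive rate; verify before quoting
    print(tb * usd_per_tb_month * 12)  # ~$10,300/year
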
nicce•4mo ago
Close to 1 petabyte for a home server is quite a lot, honestly. It will cost tens of thousands of dollars. But yeah, at the government level, nothing.
flufluflufluffy•4mo ago
Double-sided?
hamdouni•4mo ago
It is not cloud storage if it's not resilient... It's just remote storage.
alexnewman•4mo ago
Indistinguishable from crime.
heisenbit•4mo ago
It is very unlikely that the low performance would have prevented any backup. This was slow-changing data. Here, the real difficulty of doing a good, solid backup was taken as an excuse not to do anything at all.
ZiiS•4mo ago
Are we actually sure they didn't do due diligence?

These are the individual work files of civil servants. These will overwhelmingly be temporary documents they were legally obliged to delete at some point in the last 8 years. Any official filings or communications would have gone to systems of record that were not affected.

This is more a case of a very large fire, the kind of bad luck you see maybe once a decade, causing civil servants to lose hours of work on files they had in progress. A perfect system could obviously have prevented this and ensured availability, but not without cost.

rtkwe•4mo ago
Well it's really in the cloud(s) now! /s

No offsite backups is a real sin. It sounds like a classic case where the money controllers thought "cloud" automatically meant AWS-level redundancy, when instead they had a fancy centralized datacenter with insufficient backups.

ionwake•4mo ago
It's bizarre how easy it is to make smart people on HN just assume that people who are doing something weird are simply low-IQ.

It's almost a weird personality trait: a trained programmer just goes around believing everyone around him doesn't understand which way the wind blows.

A government installation for backups, for a government ruled by a weird religious sect, has no offsite backups and it goes up in flames? Well, clearly they were not smart enough to understand what an off-site backup is.

It's like, wtf guys?

Now don't get me wrong: Occam's razor, they tried to save a few bucks and it all went Pete Tong. But c'mon. Carelessness, chance, sure, but I doubt it's all down to stupidity.

voidhorse•4mo ago
It's a common problem in any field that presumably revolves around intellect, since supposedly being smarter gets you further (it may, but it is not enough on its own).

People, in general, severely overestimate their own intelligence and grossly underestimate the intelligence of others.

Consider for a moment that most of the geniuses on hacker news are not even smart enough to wonder whether or not something like IQ is actually a meaningful or appropriate way to measure intelligence, examine the history of this notion, question what precisely it is we mean by that term, how its use can vary with context, etc. etc.

ionwake•4mo ago
Is there a better word/assessment for "ability"?

Just wondering what it would be, just "success" in a domain?

I agree with you, just wondering.

libria•4mo ago
Yeah, all this chatter about technologies and processes that could have prevented this: you don't think someone in all of the Korean government knew about that?

The problem is more likely culture, hierarchy, or corruption. Guaranteed, several principal security architects have been raising the alarm about this internally, along with much safer, redundant, secure alternatives that came with an increased cost. And decision-makers with a higher rank or social/networking advantage shot them down. Maybe the original storage designer was still entrenched there, sabotaging all other proposals out of pride. Or there's an unspoken business relationship with another department providing resources for that data center that generates kickbacks.

Assuming that nobody over there knows how to do an offsite backup, or is plainly ignorant of the risk, is arrogant.

dudeinjapan•4mo ago
People keep pointing the finger at North Korea but personally I suspect a Protoss High Templar causing battery overcharge is more plausible.
dangoodmanUT•4mo ago
TWO IS ONE

ONE IS NONE

abtinf•4mo ago
Jim Hacker: How am I going to explain the missing documents to The Mail?

Sir Humphrey: Well, this is what we normally do in circumstances like these.

Jim Hacker: (reading) This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967… Was 1967 a particularly bad winter?

Sir Humphrey: No, a marvelous winter. We lost no end of embarrassing files.

eranation•4mo ago
To those wondering where it's from: https://www.imdb.com/title/tt0751825/quotes/?item=qt0238072
monster_truck•4mo ago
One of the lessons I learned from my Network Administration teacher: what do you do if you're ultimately responsible for it and they say no backups?

You tack on the hours required to do it yourself (this includes the time you must spend actually restoring from the backups to verify integrity; anything less cannot be trusted). You keep one copy in your safe, and another copy in a safety deposit box at the bank. Nobody ever has to know. It is inevitable that you will save your own ass, and theirs too.
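
Verifying a restore can be as simple as comparing file hashes between the source tree and a restored copy. A sketch (the paths are hypothetical):

    # Compare SHA-256 digests of a source tree and a restored copy.
    import hashlib, pathlib

    def digests(root):
        root = pathlib.Path(root)
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()
        }

    assert digests("/srv/data") == digests("/mnt/restore-test/data"), "restore mismatch"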

Shit happens.

commandlinefan•4mo ago
I have full confidence that management will learn nothing from this object lesson.
wigster•4mo ago
That's not a cloud. That is smoke.
syngrog66•4mo ago
Sounds like this will become a textbook case study about backups and disaster planning.
ratelimitsteve•4mo ago
if you love it, make a copy of it

this is the kind of thing that is so fundamental to IT that not doing it is at best negligence and at worst intentional malpractice. There is simply no situation that justifies not having backups, and I think it might be worth assuming intentionality here, at least for purposes of investigation. It looks like an accident, but someone (perhaps several someones; somefew, if you will) made a series of shriekingly bad decisions to put themselves in a precarious position where an accident could have an effect like this.

saltyoldman•4mo ago
> A firefighter cools down burnt batteries at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]

New caption:

> A firefighter wants to see the cool explosive reaction between water and lithium at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]

calebm•4mo ago
Mr Robot was here?
ryanmcbride•4mo ago
Here I was, self-conscious about my homelab setup, and it turns out I was already way ahead of the second most technologically advanced nation in the world!
achow•4mo ago
A little more informative source:

https://www.datacenterdynamics.com/en/news/858tb-of-governme...

- G-drive stands for Government Drive

- The incident was caused by a lithium battery fire

- The system held 858TB of data

- No backup because “The G-Drive couldn’t have a backup system due to its large capacity” (!!)

ycombinatrix•4mo ago
Guess they'll have to ask China for their backup.
4WIW•4mo ago
The board of directors should now fire the management over such gross mismanagement. Then the board of directors should be fired for not proactively requiring backups.
bornfreddy•4mo ago
Is it possible that the fire was started by malicious software, for example by somehow gaining control of UPS batteries' controllers or something similar?
polynomial•4mo ago
Insisting on having a SPOF (single point of failure) for... reasons.
bvan•4mo ago
How could you even define that as a "cloud"? It sounds like good old client-server on a single premises, with no backup whatsoever. They can't have had very secure systems either... perhaps they can buy back some of the data off the dark web, or from their next-door neighbor.
garfieldnate•4mo ago
I know Korea is a fast-changing place, but while I was there I was taught, and often observed, that the value of "ppalli ppalli" (hurry hurry) frequently meant a job was better done quickly than right, with predictably shoddy results. Obviously I have no insight into what happened here, but I can easily imagine a group of very hurried engineers feeling the pressure to just be done with their G-Drive tasks and move on to other suddenly urgent things. It's easy to put off preparing for something you don't feel will ever come.

I'm going to check all the smoke detectors in my house tomorrow :D

stefek99•4mo ago
Each government should run regular backup-and-restore drills.
bigjobby•4mo ago
This is a great fear of mine. I have backups of backups. A 2-year project is coming to a close soon, and then I'll be able to relax again. Bring back paper printouts.
ninjaa•4mo ago
Don't call it a coverup
elgolem89•3mo ago
In Latin America, this is the normal way to erase evidence of corruption...