frontpage.

Pebble Watch software is now 100% open source

https://ericmigi.com/blog/pebble-watch-software-is-now-100percent-open-source
590•Larrikin•5h ago•100 comments

Claude Advanced Tool Use

https://www.anthropic.com/engineering/advanced-tool-use
275•lebovic•4h ago•104 comments

Shai-Hulud Returns: Over 300 NPM Packages Infected

https://helixguard.ai/blog/malicious-sha1hulud-2025-11-24
826•mrdosija•13h ago•679 comments

Three Years from GPT-3 to Gemini 3

https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini
152•JumpCrisscross•1d ago•93 comments

Unpowered SSDs slowly lose data

https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/
84•amichail•4h ago•51 comments

Claude Opus 4.5

https://www.anthropic.com/news/claude-opus-4-5
688•adocomplete•5h ago•311 comments

Cool-retro-term: terminal emulator which mimics look and feel of the old CRTs

https://github.com/Swordfish90/cool-retro-term
143•michalpleban•6h ago•59 comments

Neopets.com Changed My Life (2019)

https://annastreetman.com/2019/05/19/how-neopets-com-changed-my-life/
35•bariumbitmap•5d ago•14 comments

Moving from OpenBSD to FreeBSD for firewalls

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/OpenBSDToFreeBSDMove
128•zdw•5d ago•60 comments

Show HN: I built an interactive HN Simulator

https://news.ysimulator.run/news
116•johnsillings•6h ago•58 comments

The Bitter Lesson of LLM Extensions

https://www.sawyerhood.com/blog/llm-extension
70•sawyerjhood•5h ago•29 comments

What OpenAI did when ChatGPT users lost touch with reality

https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html
72•nonprofiteer•18h ago•76 comments

PS5 now costs less than 64GB of DDR5 memory. RAM jumps to $600 due to shortage

https://www.tomshardware.com/pc-components/ddr5/64gb-of-ddr5-memory-now-costs-more-than-an-entire...
222•speckx•4h ago•140 comments

Show HN: OCR Arena – A playground for OCR models

https://www.ocrarena.ai/battle
29•kbyatnal•3d ago•10 comments

Bytes before FLOPS: your algorithm is (mostly) fine, your data isn't

https://www.bitsdraumar.is/bytes-before-flops/
31•bofersen•1d ago•7 comments

Everything you need to know about hard drive vibration (2016)

https://www.ept.ca/features/everything-need-know-hard-drive-vibration/
11•asdefghyk•4d ago•4 comments

Chrome JPEG XL Issue Reopened

https://issues.chromium.org/issues/40168998
205•markdog12•11h ago•77 comments

TSMC Arizona outage saw fab halt, Apple wafers scrapped

https://www.culpium.com/p/tsmc-arizona-outage-saw-fab-halt
150•speckx•5h ago•60 comments

You can see a working Quantum Computer in IBM's London office

https://www.ianvisits.co.uk/articles/you-can-see-a-working-quantum-computer-in-ibms-london-office...
25•thinkingemote•2d ago•5 comments

Corvus Robotics (YC S18): Hiring Head of Mfg/Ops, Next Door to YC Mountain View

1•robot_jackie•7h ago

Random lasers from peanut kernel doped with birch leaf–derived carbon dots

https://www.degruyterbrill.com/document/doi/10.1515/nanoph-2025-0312/html
4•PaulHoule•5d ago•0 comments

Inside Rust's std and parking_lot mutexes – who wins?

https://blog.cuongle.dev/p/inside-rusts-std-and-parking-lot-mutexes-who-win
121•signa11•4d ago•48 comments

Launch HN: Karumi (YC F25) – Personalized, agentic product demos

http://karumi.ai/
20•tonilopezmr•5h ago•10 comments

Building the largest known Kubernetes cluster

https://cloud.google.com/blog/products/containers-kubernetes/how-we-built-a-130000-node-gke-cluster/
90•TangerineDream•3d ago•62 comments

Mind-reading devices can now predict preconscious thoughts

https://www.nature.com/articles/d41586-025-03714-0
107•srameshc•5h ago•75 comments

NSA and IETF, part 3: Dodging the issues at hand

https://blog.cr.yp.to/20251123-dodging.html
294•upofadown•12h ago•158 comments

Fifty Shades of OOP

https://lesleylai.info/en/fifty_shades_of_oop/
34•todsacerdoti•14h ago•5 comments

GrapheneOS migrates server infrastructure from France

https://www.privacyguides.org/news/2025/11/22/grapheneos-migrates-server-infrastructure-from-fran...
187•01-_-•5h ago•67 comments

The history of Indian science fiction

https://altermag.com/articles/the-secret-history-of-indian-science-fiction
72•adityaathalye•2d ago•6 comments

Implications of AI to schools

https://twitter.com/karpathy/status/1993010584175141038
131•bilsbie•6h ago•141 comments

Unpowered SSDs slowly lose data

https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/
84•amichail•4h ago

Comments

paulkrush•3h ago
I had to search around, and I feel like a dork for not knowing this. I have my data backed up, but I keep the SSDs because it's nice to have the OS running like it was... I guess I need to be cloning the drives to disk images and storing those on spinning rust.
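
A minimal sketch of that cloning step, assuming GNU coreutils dd and placeholder device/image paths:

    # Clone the whole SSD to an image file kept on an HDD.
    # /dev/nvme0n1 and /mnt/hdd are placeholders for your drive and target.
    dd if=/dev/nvme0n1 of=/mnt/hdd/laptop-ssd.img bs=1M status=progress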
dpoloncsak•3h ago
I could be wrong, but I believe the general consensus is along the lines of "SSDs for in-use data, since they're quicker and want to be powered on often; HDDs for long-term storage, as they don't degrade nearly as fast as SSDs do when not in use."
joezydeco•1h ago
I've been going through a stack of external USB drives with laptop disks in them. They're all failing in some form or another. I'm going to have to migrate it all to a NAS with server-class drives, I guess.
Yokolos•56m ago
At the very least, you can usually still get the data off of them. Most SSDs I've encountered with defects failed catastrophically, rendering the data completely inaccessible.
PunchyHamster•35m ago
I'd imagine HDDs also don't like not spinning for years (mechanical elements generally like to be used from time to time), but at least the platters themselves are intact.
pluralmonad•1h ago
I learned this when both my old laptops would no longer boot after an extended time powered off (a couple of years). They were both stored in a working state, and later both turned out to have SSDs that were totally dead.
gosub100•1h ago
Or you could power them on once or twice a year.
ggm•58m ago
Power them on and run something to exercise the read path over every bit. That's why a ZFS filesystem integrity check/scrub is the useful model.

I'm unsure if dd if=/the/disk of=/dev/null does the read function.
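
A sketch of both approaches, assuming a placeholder device node and a ZFS pool named "tank":

    # Read every block of the device (read-only; output is discarded):
    dd if=/dev/sdX of=/dev/null bs=1M status=progress

    # On ZFS, a scrub reads and checksum-verifies all allocated data:
    zpool scrub tank
    zpool status tank   # shows scrub progress and any errors found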

fragmede•25m ago
Why would it not? It's a low-level tool that does exactly that. You could "of" it to somewhere else if you're worried it's not. I like to pipe to hexdump -C on an xterm set to a green font on a black background, for a real Matrix-movie kind of feel.
sevensor•3h ago
Flash is programmed by increasing the probability that electrons will tunnel onto the floating gate and erased by increasing the probability they will tunnel back off. Those probabilities are never zero. Multiply that by time and the number of cells, and the probability you don’t end up with bit errors gets quite low.

The difference between SLC and MLC is just that MLC has four different program voltages instead of two, so when reading the data back you have to distinguish between charge levels that are closer together. Same basic cell design. Honestly I can't quite believe MLC works at all, let alone QLC. I do wonder why there's no way to operate QLC as if it were MLC, other than the manufacturer not wanting to allow it.
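
An illustrative back-of-envelope (the per-cell probability is invented for the example): if each cell independently picks up an error with probability p over the storage period, a drive with N cells reads back fully clean with probability

    P(no raw bit error) = (1 - p)^N ≈ exp(-N·p)

so even p = 10^-12 with N ≈ 10^13 cells gives exp(-10) ≈ 0.005%, which is why the on-drive ECC is doing the real work.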

Someone•1h ago
> I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it.

You can run an error-correcting code on top of the regular blocks of memory, storing, for example (really an example; I don’t know how large the ‘blocks’ that you can erase are in flash memory), 4096 bits in every 8192 bits of memory, and recovering those 4096 bits from each block of 8192 bits that you read in the disk driver. I think that would be better than a simple “map low levels to 0, high levels to 1” scheme.
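
A file-level analogue of that idea, assuming the par2cmdline tool (doing it at the block level would need driver support, as described above):

    # Create ~50% redundancy alongside the file; verify or repair later:
    par2 create -r50 recovery.par2 disk-image.img
    par2 verify recovery.par2
    par2 repair recovery.par2   # reconstructs the file from parity blocks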

bobmcnamara•1h ago
> I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it.

Loads of drives do this (or SLC) internally. Though it would be handy if a physical format could change the provisioning at the kernel-accessible layer.

em500•57m ago
> Honestly I can’t quite believe mlc works at all, let alone qlc. I do wonder why there’s no way to operate qlc as if it were mlc, other than the manufacturer not wanting to allow it.

Manufacturers often do sell such pMLC or pSLC (p = pseudo) cells as "high endurance" flash.

testartr•45m ago
The market mostly demands higher capacity.

TLC/QLC works just fine; it's really difficult to use up the erase cycles unless you really are writing to the disk 24/7 at hundreds of megabytes a second.

tcfhgj•23m ago
I have an MLC SSD whose TBW/GB is much higher than the specified TBW/GB guarantee of typical QLC SSDs.
55873445216111•14m ago
All the big 3D NAND makers have already switched from floating gate to charge trapping. Basically the same as what you describe, except the electrons get stuck in a non-conductive region instead of on an insulated gate.
brian-armstrong•1h ago
Powering the SSD on isn't enough. You need to read every bit occasionally in order to recharge the cells. If you have them in a NAS, then a monthly full-volume check is probably sufficient.
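
A minimal cron sketch of that monthly full read (the device path is a placeholder; a filesystem-level scrub is better where available):

    # /etc/cron.d/ssd-refresh: read the whole device on the 1st of each month.
    0 3 1 * * root dd if=/dev/sda of=/dev/null bs=1M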
Izkata•1h ago
Huh. I wonder if this is why I'd sometimes get random corruption on my laptop's SSD. I'd reboot after a while and fsck would find issues in random files I hadn't touched in a long time.
formerly_proven•1h ago
Unless your setup is a very odd Linux box, fsck will never check the consistency of file contents.
suspended_state•37m ago
But metadata is data too, right? I guess the next question is: would it be possible for parts of the FS metadata to remain untouched long enough for the SSD data-corruption process to occur?
brian-armstrong•1h ago
It's quite possible. Some SSDs are worse offenders for this than others. I have some Samsung 870 EVOs that lost data the way you described. Samsung knew about the issue and quietly swept it under the rug with a firmware update, but once the data was lost, it was gone for good.
PunchyHamster•36m ago
Huh, I thought I'd just gotten a faulty one; mine died shortly after the warranty ended (and had a bunch of media errors before that).
ethin•22m ago
I ran into this firmware bug with the two drives in my computer. They randomly failed after a while -- and by "a while" I mean less than a year of usage. It took two replacements before I finally realized that I should check for a firmware update.
gruez•14m ago
If you're getting random corruption like that, you should replace the SSD. SSDs (and also hard drives) already have built-in ECC, so if you're getting errors on top of that, it's not just random cosmic rays. It's your SSD being extra broken, which doesn't bode well for the health of the drive as a whole.
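
A quick way to check those counters, assuming smartmontools and a placeholder SATA device:

    smartctl -H /dev/sda   # overall health self-assessment
    smartctl -A /dev/sda | grep -Ei 'ecc|uncorrect|realloc|pending'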
derkades•1h ago
Isn't that the SSD controller's job?
brian-armstrong•1h ago
It would surely depend on the SSD and the firmware it's running. I don't think you can entirely count on it. Even if it were working perfectly, and your strategy was to power the SSD on periodically to refresh the cells, how would you know when it had finished?
ethin•24m ago
NVMe has read recovery levels (RRLs) and two different self-test modes (short and long), but what both of those modes do is entirely up to the manufacturer. So I'd think the only way to actually do this is to have host software do it, no? Or would even that not be enough? I mean, in theory the firmware could return anything to the host, but... that feels too much like a conspiracy to me.
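
For reference, the host-initiated version of that, assuming nvme-cli and a placeholder controller (what the test actually reads is still up to the vendor):

    nvme device-self-test /dev/nvme0 --self-test-code=2   # 2 = extended test
    nvme self-test-log /dev/nvme0                         # poll completion/result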
formerly_proven•1h ago
> Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year of being completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 years and 10 years of unpowered storage, respectively.

This is somewhat confused writing. Consumer SSDs usually do not have a data retention spec; even in this very detailed Micron datasheet you won't find one: https://advdownload.advantech.com/productfile/PIS/96FD25-S2T... Meanwhile, the data retention spec for enterprise SSDs applies at the end of their rated life, which is usually a DWPD/TBW intensity you won't reach in actual use anyway - that's where numbers like "3 months @ 50 °C" or whatever come from.

In practice, SSDs don't tend to lose data over realistic time frames. Don't hope for a "guaranteed by design" spec on that, though; some pieces of silicon are more equal than others.

Yokolos•1h ago
Any given TBW/DWPD values are irrelevant for unpowered data retention. AFAIK nobody gives retention values in their datasheets, and I'm wondering where the article's numbers are from, because I've never seen anything official. At this point I'd need to be convinced that the manufacturers even know them internally, because they've never mentioned them, and it seems to be outside the intended use cases for SSDs.
bossyTeacher•1h ago
This is why I would rather pay someone a couple of dollars per year to handle all this for me. If need be, pay two providers to have a backup.
loloquwowndueo•1h ago
Who do you pay for this? (To rephrase: which cloud storage vendors do you use?) Interested in the $2/month price point :)
ggm•57m ago
Tell me about this $2/week filestore option. I'm interested.
867-5309•31m ago
Continuing the bizarre trend, I'm here for the $2/day deal.
PunchyHamster•33m ago
Backblaze B2 is $6/TB/month, so that works if you have around 300GB... stuff like restic or kopia backs up nicely to it.
Terr_•26m ago
Recently started fiddling with restic and B2; it worked fairly seamlessly once I stopped trying so hard to be fancy with permissions and capabilities (cap_dac_read_search). There were some conflicts between "the way that works interactively" [0] and "the way that works well with systemd" [AmbientCapabilities=].

One concern I have is that B2's download costs mean verifying remote snapshots could get expensive. I suppose I could use `restic check --read-data-subset X` to do a random spot-check of smaller portions of the data, but I'm not sure how valuable that would be.

I like how it resembles LUKS encryption, where I can have one key for the automated backup process and a separate memorize-only passphrase for if things go Very Very Wrong.

[0] https://restic.readthedocs.io/en/latest/080_examples.html#ba...
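
A minimal sketch of that setup (bucket and path are placeholders; credentials go in restic's B2 environment variables):

    export B2_ACCOUNT_ID=<application-key-id>
    export B2_ACCOUNT_KEY=<application-key>
    restic -r b2:my-bucket:backups init
    restic -r b2:my-bucket:backups backup ~/data
    # Spot-check 10% of pack data without downloading the whole repo:
    restic -r b2:my-bucket:backups check --read-data-subset=10%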

Terr_•31m ago
I assume "couple of" was figurative, to indicate the cost is substantially less than managing your own bank of SSDs and ensuring it is periodically powered etc.

[Edit: LOL, I see someone else posted literally the same example within the same minute. Funny coincidences.]

That said, they could also be storing relatively small amounts. For example, I back up to Backblaze B2, advertised at $6/TB/month, so ~300 MB at rest will be a "couple" bucks.

traceroute66•1h ago
I assume this blog is a re-hash of the JEDEC retention standards[1].

The more interesting thing to note from those standards is that the required retention period differs between "Client" and "Enterprise" category.

Enterprise category only has power-off retention requirement of 3 months.

Client category has power-off retention requirement of 1 year.

Of course there are two sides to every story...

The Enterprise category standard assumes power-on active use of 24 hours/day, while the Client category is only intended for 8 hours/day.

As with many things in tech, it's up to the user to pick which side they compromise on.

[1]https://files.futurememorystorage.com/proceedings/2011/20110...

throw0101a•43m ago
> I assume this blog is a re-hash of the JEDEC retention standards[1].

Specifically in JEDEC JESD218. (Write endurance in JESD219.)

tcfhgj•31m ago
With 1-year power-off retention you still lose data, so it's still a compromise on data retention.
testartr•49m ago
What is the exact protocol to "recharge" an SSD that has been offline for months?

Do I just plug it in and leave the computer on for a few minutes? Does it need to stay on for hours?

Do I need to run a special command or TRIM it?

PunchyHamster•37m ago
I'd imagine a full read of the whole device might trigger any self-refresh logic, but I'd also imagine it's heavily dependent on manufacturer and firmware.
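
A non-destructive way to do that full read, assuming e2fsprogs' badblocks and a placeholder device:

    # Read-only surface scan: -s shows progress, -v reports error counts.
    badblocks -sv /dev/sdX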
PaulKeeble•26m ago
We really don't know. One thing I wish some of these sites would do is actually test how long it takes for the drives to decay, and also retest after they have been left powered for, say, 10 minutes to an hour, read completely, written to a bit, etc., to see if they can determine what the likely requirement is.

The problem is that the test would take years, be out of date by the time it's released, and new controllers will be out with potentially different needs/algorithms.

unsnap_biceps•4m ago
There was one guy who tested this:

https://www.tomshardware.com/pc-components/storage/unpowered...

    The data on this SSD, which hadn't been used or powered up for two years, was 100% good on initial inspection. All the data hashes verified, but it was noted that the verification time took a smidgen longer than two years previously. HD Sentinel tests also showed good, consistent performance for a SATA SSD.
    Digging deeper, all isn't well, though. Firing up Crystal Disk Info, HTWingNut noted that this SSD had a Hardware ECC Recovered value of over 400. In other words, the disk's error correction had to step in to fix hundreds of data-based parity bits.
    ...
    As the worn SSD's data was being verified, there were already signs of performance degradation. The hashing audit eventually revealed that four files were corrupt (hash not matching). Looking at the elapsed time, it was observed that this operation astonishingly took over 4x longer, up from 10 minutes and 3 seconds to 42 minutes and 43 seconds.
    Further investigations in HD Sentinel showed that three out of 10,000 sectors were bad and performance was 'spiky.' Returning to Crystal Disk Info, things look even worse. HTWingNut notes that the uncorrectable sectors count went from 0 to 12 on this drive, and the hardware ECC recovered value went from 11,745 before to 201,273 after tests on the day.
antisthenes•17m ago
I would run something like CHKDSK, or write a script to calculate a hash of every file on disk.

No idea if that's enough, but it seems like a reasonable place to start.
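
A minimal version of that script, assuming GNU coreutils and a placeholder data path:

    # Build a manifest of hashes, then re-verify against it later:
    find /data -type f -print0 | xargs -0 sha256sum > manifest.sha256
    sha256sum --check --quiet manifest.sha256   # prints only mismatches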

BaardFigur•24m ago
I don't use my drive much. I still boot it up and write some data, just not to the long-term one. Am I at risk?
zozbot234•11m ago
AIUI, informal tests have demonstrated quite a bit of data corruption in flash drives that are so worn out they might as well be about to fail altogether (well beyond any manufacturer's actual TBW specs), but not otherwise, least of all in new drives that are only written once over for the test. It seems that if you don't wear out your drive all that much, you'll have far less to worry about.
yapyap•23m ago
Good to know, but apart from some edge cases this doesn't matter that much.
dale_glass•17m ago
So on the off-chance that there's a firmware engineer in here, how does this actually work?

Like, does an SSD do some sort of refresh on power-on, or every N hours, or do you have to access the specific block, or...? What if you interrupt the process, e.g. an NVMe in an external case that you just plug in once a month for a few minutes to use as a huge flash drive; is that a problem?

What about the unused space, is a 4 TB drive used to transport 1 GB of stuff going to suffer anything from the unused space decaying?

It's all very unclear what any of this means in practice and how a user is supposed to manage it.

zozbot234•6m ago
Typically unused empty space is a good thing, as it allows drives to run in MLC or SLC mode instead of their native QLC. (At least, this seems to be the obvious implication from performance testing, given the better performance of SLC/MLC compared to QLC.) And the data remanence of SLC/MLC can be expected to be significantly better than QLC's.
gruez•1m ago
>as it will allow drives to run in MLC or SLC mode instead of their native QLC

That depends on the SSD controller implementation, specifically whether it proactively moves stuff from the SLC cache to the TLC/QLC area. I expect most controllers do this, given that if they didn't, the drive would quickly lose performance as it filled up. There's basically no reason not to proactively move stuff over.

tzs•15m ago
What about powered SSDs that contain files that are rarely read?

My desktop computer is generally powered except when there is a power failure, but among the million+ files on its SSD there are certainly some that I do not read or write for years.

Does the SSD controller automatically look for used blocks that need to have their charge refreshed and do so, or do I need to periodically do something like "find / -type f -print0 | xargs -0 cat > /dev/null" to make sure every file gets read occasionally?