frontpage.

Tiny C Compiler

https://bellard.org/tcc/
102•guerrilla•3h ago•44 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
186•valyala•7h ago•34 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
110•surprisetalk•7h ago•116 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
43•gnufx•6h ago•45 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
130•mellosouls•10h ago•280 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
880•klaussilveira•1d ago•269 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
129•vinhnx•10h ago•15 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
166•AlexeyBrin•12h ago•29 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
97•zdw•3d ago•46 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
60•randycupertino•2h ago•90 comments

First Proof

https://arxiv.org/abs/2602.05192
96•samasblack•9h ago•63 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
265•jesperordrup•17h ago•86 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
167•valyala•7h ago•148 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
85•thelok•9h ago•18 comments

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
4•todsacerdoti•4d ago•1 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
549•theblazehen•3d ago•203 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
49•momciloo•7h ago•9 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
26•mbitsnbites•3d ago•2 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
48•amitprasad•1h ago•47 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
24•languid-photic•4d ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
246•1vuio0pswjnm7•13h ago•388 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
80•josephcsible•5h ago•107 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
108•onurkanbkrc•12h ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
138•videotopia•4d ago•44 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
57•rbanffy•4d ago•17 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
215•limoce•4d ago•123 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
303•alainrk•12h ago•482 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
48•marklit•5d ago•9 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
121•speckx•4d ago•185 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
294•isitcontent•1d ago•39 comments

SSD-IQ: Uncovering the Hidden Side of SSD Performance [pdf]

https://www.vldb.org/pvldb/vol18/p4295-haas.pdf
59•jandrewrogers•5mo ago

Comments

jeffbee•5mo ago
Seems like the color codes in Table 3 are reversed? Higher write application factors are green and lower ones are red.
djoldman•5mo ago
That table is really confusing as the colors have wildly different meanings depending on the row.
jmpman•5mo ago
Feels like a paper that should have been published about 15 years ago.
tanelpoder•5mo ago
In the database-nerd world, we had something like this roughly 10 years ago, written by @flashdba. Still a good read:

https://flashdba.com/category/storage-for-dbas/understanding...

loeg•5mo ago
We've observed FDP to make a surprisingly big difference in drive-internal WA. If you can meaningfully tag different lifetime/stream data from your workloads, and you can expect hardware that supports it, it's very helpful. We saw something like a WAF reduction from ~1.60 to ~1.04 (on a synthetic but vaguely plausible workload).
jeffbee•5mo ago
With rocks, I assume?
loeg•5mo ago
Most of our writes (>99%) aren't rocks.
jeffbee•5mo ago
Hrmm. Still guessing about your workloads, but isn't it possible that workload A could cause a disproportionate amount of amplification, while still being much smaller in aggregate than workload B?
loeg•5mo ago
Seems possible, though we only incorporated FDP changes to the non-rocks writes and saw large WAF reduction. (We're approximately an object store as a service with rocksdb for metadata and direct disk writes for data; typical object size is hundreds of kB to single-digit MBs, and rocks updates are batched. Many of our users can accurately predict how long their objects will live, so we can segregate FDP streams by lifetime.)
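The mechanism behind this kind of WAF reduction can be illustrated with a toy FTL simulation. Everything below is invented for illustration (block geometry, hot/cold skew, overprovisioning), and FDP is approximated as separate per-stream append points rather than the actual NVMe directive; it is not a model of the commenter's system.

```python
import random

# Toy greybox-FTL sketch: hot data is overwritten often, cold data
# rarely. With one shared append point, hot and cold pages land in the
# same erase blocks, so GC keeps re-copying cold pages. Per-stream
# append points (the effect FDP hints enable) keep them apart.

PAGES_PER_BLOCK = 64
N_BLOCKS = 256
OP = 0.10  # physical spare capacity beyond the logical space

def simulate(separated, host_writes=200_000, seed=1):
    rng = random.Random(seed)
    logical = int(N_BLOCKS * PAGES_PER_BLOCK * (1 - OP))
    hot_cut = logical // 5            # 20% of LPNs take 80% of writes
    loc = {}                          # lpn -> block id
    valid = [set() for _ in range(N_BLOCKS)]
    free = list(range(N_BLOCKS))
    n_host_aps = 2 if separated else 1
    active = [free.pop() for _ in range(n_host_aps + 1)]  # last AP is for GC
    fill = [0] * len(active)
    nand = 0

    def append(lpn, ap, refill):
        nonlocal nand
        if lpn in loc:                        # invalidate the old copy
            valid[loc[lpn]].discard(lpn)
        b = active[ap]
        valid[b].add(lpn)
        loc[lpn] = b
        fill[ap] += 1
        nand += 1
        if fill[ap] == PAGES_PER_BLOCK:       # block sealed; open a new one
            active[ap] = refill()
            fill[ap] = 0

    def gc_once():
        # greedy: reclaim the sealed block with the fewest valid pages
        sealed = set(range(N_BLOCKS)) - set(free) - set(active)
        victim = min(sealed, key=lambda b: len(valid[b]))
        for lpn in list(valid[victim]):       # relocate still-valid pages
            append(lpn, len(active) - 1, free.pop)
        free.append(victim)                   # erase

    def host_block():
        while len(free) < 2:
            gc_once()
        return free.pop()

    for lpn in range(logical):                # preconditioning fill
        append(lpn, 0, host_block)
    base = nand

    for _ in range(host_writes):
        hot = rng.random() < 0.8
        lpn = rng.randrange(hot_cut) if hot else rng.randrange(hot_cut, logical)
        ap = 0 if (hot or not separated) else 1
        append(lpn, ap, host_block)

    return (nand - base) / host_writes

waf_mixed = simulate(separated=False)
waf_split = simulate(separated=True)
print(f"WAF, mixed streams:     {waf_mixed:.2f}")
print(f"WAF, separated streams: {waf_split:.2f}")
```

The separated run comes out lower because hot-stream blocks invalidate almost entirely before GC touches them, while in the mixed run every victim block strands some cold valid pages that must be re-copied.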
kvemkon•5mo ago
> Vendors downplay the idiosyncrasies of specific SSD models by marketing their devices using four “headline” throughput metrics: sequential read, sequential write, random read, and random write.

For SOHO yes, where no serious database usage is expected. But server/datacenter SSDs are categorized: read-intensive, write-intensive and mixed-usage.

wtallis•5mo ago
You're conflating two different things here: the performance metrics that marketing provides, and the product segments that marketing groups products into.
p_ing•5mo ago
Gamers also fall into the read/write number trap. When tested, that type of workload performs about the same from PCIe 3.0 through 5.0 due to the often-random 4 KiB access pattern. And in some cases, there was only a minor delta between a PCIe 5.0 NVMe drive and a SATA SSD.

https://www.youtube.com/watch?v=gl8wXT8F3W4
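The arithmetic behind this is simple: at queue depth 1, random-read throughput is set by per-IO latency, not link bandwidth. The latency and link numbers below are illustrative assumptions, not measurements from the video.

```python
# At QD1, each 4 KiB read must complete before the next is issued,
# so throughput = block size / latency, regardless of link speed.
block = 4096                 # bytes per random read
lat_us = 80                  # assumed end-to-end QD1 read latency
qd1_mbps = block / (lat_us * 1e-6) / 1e6
print(f"QD1 4K throughput: ~{qd1_mbps:.0f} MB/s")

# Link bandwidth barely matters at this depth (rough link rates assumed):
for link, link_mbps in [("SATA", 550), ("PCIe 3.0 x4", 3500), ("PCIe 5.0 x4", 14000)]:
    print(f"{link}: link allows ~{link_mbps} MB/s, QD1 4K still ~{qd1_mbps:.0f} MB/s")
```

With an assumed 80 µs latency, every interface is stuck around 51 MB/s until the workload adds queue depth or larger IOs.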

antonkochubey•5mo ago
What games would load data in random 4KB chunks? Textures, sounds etc are in megabytes nowadays, 4K random reads are completely irrelevant.
p_ing•5mo ago
It doesn't matter how large the asset is, it matters what the method used to read the asset is.

Not every application will read in a specific size, but 4KiB isn't uncommon.

lmz•5mo ago
Those categories are usually derived from another advertised number: Drive Writes Per Day.

As an example in this Micron product brief the Latency for the read-intensive vs mixed use product are the same: https://assets.micron.com/adobe/assets/urn:aaid:aem:e71d9e5e...

Of course the footnote says that latency is a median at QD=1 random 4K IO.

From the paper the PM9A3 which is 1 DWPD has better P99.9 write latency under load vs the 7450 Pro (3 DWPD mixed use).
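The DWPD-to-endurance relationship is just capacity times warranty length. A sketch with a hypothetical 3.84 TB drive and 5-year warranty (illustrative numbers, not figures from the Micron brief):

```python
# TBW (terabytes written, the endurance rating) implied by a DWPD
# figure: one full drive write per day over the warranty period.
capacity_tb = 3.84
warranty_days = 5 * 365      # 1825 days

def tbw(dwpd):
    return dwpd * capacity_tb * warranty_days

print(f"1 DWPD -> {tbw(1):,.0f} TBW")
print(f"3 DWPD -> {tbw(3):,.0f} TBW")
```

Note this says nothing about latency under load, which is the point made above: DWPD segments the products, but the headline latency numbers can be identical across segments.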

bayindirh•5mo ago
The best way to spec a storage system for any use case is to give baseline numbers for the desired benchmark (plus its parameters), and let the vendors do their tests in house and spec the system out to you.

If you can borrow systems, you can do it yourself, too.

Otherwise, there are too many variables to calculate now. In the past it was easier. Now it's much more complicated.

__turbobrew__•5mo ago
Something I learned the hard way is that SSD performance can nosedive if DISCARD/TRIM commands are not sent to the device. Up to 50% lower throughput on our Samsung DC drives.

Through metrics I noticed that some SSDs in a cluster were much slower than others despite being uniform hardware. After a bit of investigation it was found that the slow devices had been in service longer, and we were not sending DISCARDs to the SSDs due to a default in dm-crypt: https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discar...

The penalty was around 50% if TRIM was never run. We now run blkdiscard when provisioning new drives and enable discards on the crypt devices, and things seem much better now.

Reflecting a bit more, this makes me more bullish on system integrators like Oxide as I have seen so many times software which was misconfigured to not use the full potential of the hardware. There is a size of company between a one person shop and somewhere like facebook/google where they are running their own racks but they don’t have the in house expertise to triage and fix these performance issues. If for example you are getting 50% less performance out of your DB nodes, what is the cost of that inefficiency?
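The reason missing discards hurt this much can be shown with a one-line GC cost model. The valid-fraction values below are assumptions for illustration, not measurements from the cluster above.

```python
# Toy GC cost model: to reclaim an erase block whose pages are a
# fraction u valid, the FTL copies u*N pages to gain (1-u)*N free
# pages, so write amplification = 1 / (1 - u).
def waf(u):
    return 1 / (1 - u)

# With discards, deleted data counts as invalid and u stays low;
# without them, stale-but-undiscarded pages still look valid to the
# drive and u rises (example values assumed):
print(f"with TRIM    (u=0.50): WAF {waf(0.50):.1f}")
print(f"without TRIM (u=0.75): WAF {waf(0.75):.1f}")
```

The model also explains why the effect grows with time in service: the longer a drive runs without discards, the closer the FTL's view of the LBA space gets to 100% live.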

p_ing•5mo ago
While not the same issue, I took four 500GB Samsung 850 EVO drives and created a Storage Space out of them for Hyper-V VMs. Under any sort of load the volume would reach ~1 second latency. This was on a SAS controller in JBOD mode.

Switched to some Intel 480GB DC drives and performance was in the low milliseconds as I would have thought any drive should be.

Not sure if I was hitting the DRAM limit of the Samsungs or what. I spent a bit of time troubleshooting, but this was a home lab and used Intel DCs were cheap on eBay. Granted, the Samsung EVOs weren't targeted at that type of work.

__turbobrew__•5mo ago
850 EVO is basically the lowest tier consumer device, from what I have read those devices can only handle short bursts of IOs and do not perform well under sustained load.
sitkack•5mo ago
Could be garbage collection pauses. You could try wiping them again with zeros or doing a drive-specific reset and see if performance returns to normal.
pkaye•5mo ago
The Samsung 850 EVO drives probably used an SLC write cache. A small portion of the NAND is configured as an SLC write buffer so they can handle a burst of writes faster and later move the data to the MLC/TLC region. This is sufficient for typical consumer workloads.

Another thing you will notice is the 850 EVO is 500GB capacity while the Intel one is 480GB. The difference in capacity is put towards overprovisioning, which reduces write amplification. The idea is that if you have sufficient free space available, whole NAND blocks will naturally get invalidated before you run out of free blocks.
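The spare-capacity difference can be made concrete if we assume both drives carry the same 512 GiB of raw NAND (a plausible assumption for that era, not a published spec):

```python
# Overprovisioning implied by the advertised capacities, assuming
# 512 GiB of raw NAND behind both drives.
raw_gb = 512 * 2**30 / 1e9        # ~549.76 GB of raw flash
for user_gb in (500, 480):
    op = (raw_gb - user_gb) / user_gb
    print(f"{user_gb} GB drive: ~{op:.1%} spare capacity")
```

Under that assumption the 480 GB drive has roughly half again as much spare area as the 500 GB one, which is exactly the free-block headroom the comment describes.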

ADefenestrator•5mo ago
The Samsung consumer drives definitely don't do well under sustained high-write workloads. The SLC cache fills up after a while, and write speeds drop drastically. They also have some variety of internal head-of-line-blocking issue, where read latency goes way up when the writes saturate. I can't say I've ever seen 1s latency out of them, though.

Consumer drives can definitely have some quirks. The 2TB 960 Pro also just had weird write latency, even under relatively moderate load: 2-4ms instead of <1ms. It didn't really get much worse with extra load and concurrency, except that if there are writes enqueued, the reads end up waiting behind them for some reason and also see the latency penalty.

They can also be weird behind RAID controllers, though I'm not sure if JBOD counts there. For whatever reason, the 860 EVO line wouldn't pass TRIM through the SAS RAID controller, but the 860 PRO would.
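The SLC-cache cliff described above is easy to model: writes run at the cached rate until the cache fills, then fall to the native fold-to-TLC rate. All numbers below are illustrative, not measurements of any specific drive.

```python
# Effective sustained write speed for a drive with an SLC write cache
# (assumed sizes/speeds; real drives also recover cache during idle).
cache_gb  = 12      # assumed SLC cache capacity
fast_mbps = 500     # write speed while the cache has room
slow_mbps = 150     # speed once writes fold straight to TLC

def avg_speed(transfer_gb):
    fast_part = min(transfer_gb, cache_gb)
    seconds = (fast_part * 1000 / fast_mbps
               + max(0, transfer_gb - cache_gb) * 1000 / slow_mbps)
    return transfer_gb * 1000 / seconds     # average MB/s

for size in (4, 12, 64):
    print(f"{size} GB sustained write: ~{avg_speed(size):.0f} MB/s")
```

This is why benchmark bursts look great while a long VM migration or backup crawls: the first cache-sized chunk runs fast, and the average converges toward the slow rate as the transfer grows.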

lathiat•5mo ago
The fun part is that for a bunch of SSD drives (especially older ones), sending discard/trim may also tank the performance. Due to firmware bugs.
loeg•5mo ago
You still might need to pace how fast you send discard/trim to modern drives, FWIW.