
Closed-Loop Extracorporeal Vascular Cleaning by Staged Chemical Dissolution

https://zenodo.org/records/19763808
1•iliatoli•35s ago•0 comments

Officials underestimated impact of AI datacentres on UK carbon emissions

https://www.theguardian.com/technology/2026/apr/24/officials-hugely-underestimated-impact-of-ai-d...
1•Brajeshwar•2m ago•0 comments

Sloppy Copies

https://www.markround.com/blog/2026/04/19/sloppy-copies/
2•birdculture•3m ago•0 comments

Matter Devices Blog – Matterdevices.io

https://matterdevices.io/
1•alator21•7m ago•0 comments

Goldman Sachs leads record renminbi borrowing by US banks

https://www.ft.com/content/e83da9f6-f065-46ad-ad21-e677550b7b0c
1•t-3•9m ago•0 comments

Timothy Leary–1960s Acid Guru–May Have Been Among the CIA's Greatest Assets

https://covertactionmagazine.com/2026/04/24/timothy-leary-1960s-acid-guru-may-have-been-among-the...
1•t-3•10m ago•0 comments

"Game-changer" breast cancer study retracted as researcher out of his post

https://retractionwatch.com/2026/04/15/game-changer-breast-cancer-study-retracted-as-indiana-rese...
1•hentrep•12m ago•0 comments

EverAct – blockchain-verified rewards for walking, cycling, sustainable shopping

https://everact.app/en
1•save-the-planet•13m ago•1 comments

Echon – a Discord alternative I've been building solo

https://echon-voice.com
1•Phrosen•15m ago•0 comments

Lochat: See messages route across the world

https://chat.knowww.net/
1•johanam•15m ago•0 comments

An oral history of the Harvard Lampoon

https://www.washingtonpost.com/style/2026/04/25/harvard-lampoon-150th-conan-snl/
2•raldi•16m ago•0 comments

Reactivity in vanilla JavaScript – Observable Podcast [video]

https://www.youtube.com/watch?v=AUh5aJfafJM
1•recifs•17m ago•1 comments

Social media is no longer social

https://bsky.app/profile/pettertornberg.com/post/3mk64uzhdm22z
5•frereubu•19m ago•1 comments

SciRS2 – Scientific Computing and AI in Rust without C/C++/Fortran dependencies

https://github.com/cool-japan/scirs
1•mikolajw•20m ago•0 comments

Show HN: Chatforge – drag two local LLM conversations together to merge context

https://github.com/gerritsxd/chatforge
1•cyg2•23m ago•0 comments

Show HN: I built a free email security checker for small businesses

https://shielddeskhq.com/checker
1•minhajulmahib•23m ago•0 comments

ByteCode – A C2 Framework Windows Defender Safe

https://github.com/wadecalvin9/ByteCode
2•KIRA404•27m ago•1 comments

The reporters at this news site are AI bots. OpenAI's super PAC appears to be

https://modelrepublic.substack.com/p/the-reporters-at-this-news-site-are
5•CarbonCycles•27m ago•0 comments

Medical data of 500k UK volunteers listed for sale on Alibaba

https://www.malwarebytes.com/blog/news/2026/04/medical-data-of-500000-uk-volunteers-listed-for-sa...
4•salkahfi•28m ago•0 comments

A Collection of Chronic Medical Conditions Common in Autistic and ADHD Adults [pdf]

https://allbrainsbelong.org/wp-content/uploads/2023/08/CLINICIAN-GUIDE-Everything-is-Connected-to...
3•AndrewDucker•30m ago•0 comments

Tokyo Metropolitan Building Staff Cafeteria

https://www.atlasobscura.com/places/tokyo-metropolitan-building-staff-cafeteria
2•rawgabbit•30m ago•1 comments

Cursor: Agents.md not automatically injected due to bug

https://forum.cursor.com/t/agents-md-not-automatically-injected/158448
3•mopatches•31m ago•1 comments

Fight AI Slop and Fakery: Build and Distribute Your Own Trust Chain

https://blog.certisfy.com/2026/04/build-your-own-trust-chain.html
2•Edmond•32m ago•1 comments

Browser Based Constellation Modeling Tool

https://sixthsensor.io/
2•apoperi•33m ago•0 comments

Fully Featured Audio DSP Firmware for the Raspberry Pi Pico

https://github.com/WeebLabs/DSPi
3•BoingBoomTschak•36m ago•1 comments

EgoNet: A Peer-to-Peer Digital Existence System – In Homage to Satoshi Nakamoto

https://zenodo.org/records/19633431
2•R_Horiguchi•37m ago•0 comments

Autonomous weapons are a game-changer

https://www.economist.com/special-report/2018/01/25/autonomous-weapons-are-a-game-changer
3•andsoitis•39m ago•0 comments

In Search of the Missing Artist

https://www.abc.net.au/news/2026-04-25/in-search-of-the-missing-artist-jean-paul-mangin/106593220
2•colinprince•42m ago•0 comments

NASA Releases Powerful LAVA Software to US Aerospace Industry

https://www.nasa.gov/aeronautics/nasa-releases-powerful-lava-software-to-us-aerospace-industry/
2•happy-go-lucky•42m ago•0 comments

The Benchmark Gap: 1,472 runs show coding-agent context changes outcomes

https://github.com/dorukardahan/benchmark-gap
2•dorukardahan•42m ago•1 comments

Using Postgres pg_test_fsync tool for testing low latency writes

https://tanelpoder.com/posts/using-pg-test-fsync-for-testing-low-latency-writes/
40•mfiguiere•11mo ago
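For context, pg_test_fsync ships with PostgreSQL and just needs to be pointed at a file on the device you want to measure; the path below is illustrative:

```
# -s: seconds per test (the blog used the 5-second default)
# -f: location of the scratch file, i.e. which device gets tested
pg_test_fsync -s 10 -f /mnt/wal-device/pg_test_fsync.out
```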

Comments

singron•11mo ago
Note that this workload is a worst case for IOPS, and you will get higher IOPS in nearly any optimized workload. E.g. postgres needs to sync the WAL in order to commit (which does look like this test), but a ton of other writes happen in parallel on the heap and index pages, in addition to any reading you do. IME the consumer drives that benchmark at 500K IOPS but get only 500 IOPS on this test might get 10K or 20K IOPS on a more typical mixed workload.
tanelpoder•11mo ago
Throughput with enough I/O concurrency, yes. That's actually why I wrote this blog entry, just to bring attention to this: nice IOPS numbers do not translate to nice individual I/O latency numbers. If an individual WAL write takes ~1.5 ms (instead of tens of microseconds), your app transactions also take 1.5+ ms and not sub-millisecond. Not everyone cares about this (and often doesn't even need to), but it's worth being aware of.

I tend to set up a small, but completely separate block device (usually on enterprise SAN storage or cloud block store) just for WAL/redo logs to have a different device with its own queue for that. So that when that big database checkpoint or fsync happens against datafiles, the thousands of concurrently submitted IO requests won't get in the way of WAL writes that still need to complete fast. I've done something similar in the past with separate filesystem journal devices too (for niche use cases...)

Edit: Another use case for this is that ZFS users can put the ZIL on low-latency devices, while keeping the main storage on lower cost devices.
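What such a WAL-style write actually measures can be sketched in a few lines of Python; this is a rough stand-in for pg_test_fsync's fdatasync test (8 kB write, then fdatasync), not a replacement for it:

```python
import os
import tempfile
import time

def median_fdatasync_latency(n_ops: int = 50) -> float:
    """Median seconds per 8 kB write + fdatasync, pg_test_fsync-style."""
    block = b"\0" * 8192
    fd, path = tempfile.mkstemp()
    try:
        samples = []
        for _ in range(n_ops):
            start = time.perf_counter()
            os.write(fd, block)
            os.fdatasync(fd)               # wait until the write is on stable storage
            samples.append(time.perf_counter() - start)
            os.lseek(fd, 0, os.SEEK_SET)   # overwrite the same block each round
        samples.sort()
        return samples[n_ops // 2]
    finally:
        os.close(fd)
        os.unlink(path)

print(f"median 8kB write+fdatasync: {median_fdatasync_latency() * 1e6:.0f} usecs/op")
```

On a drive where each flush takes ~1.5 ms, the printed number lands in that range no matter what the drive's headline IOPS figure says.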

natmaka•11mo ago
> I tend to set up a small, but completely separate block device (usually on enterprise SAN storage or cloud block store) just for WAL/redo logs

I'm not sure about this, as this separate device may handle more of the total (aggregated) work as a member of a unique pool (a RAID made of all available non-spare devices) used by the PostgreSQL server.

It seems to me that in most cases the most efficient setup, even when trying hard to reduce the maximal latency (and therefore sacrificing some throughput), is a unique pool AND adequate I/O scheduling enforcing a "max latency" parameter.

If, during peaks of activity, your WAL-dedicated device isn't permanently at 100% usage while the data pool is, then dedicating it may (overall) bump up the max latency and reduce throughput.

Tweaking some parameters (bgwriter, full_page_writes, wal_compression, wal_writer_delay, max_wal_senders, wal_level, wal_buffers, wal_init_zero...) with respect to the usage profile (max tolerated latency, OLTP, OLAP, proportion of SELECTs and INSERTs/UPDATEs, I/O subsystem characteristics and performance, kernel parameters...) is key.
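As an illustration of where that tuning lives, a postgresql.conf fragment might look like this (values are placeholders to adjust per workload, not recommendations):

```ini
wal_compression = on           # fewer WAL bytes at some CPU cost
wal_buffers = 16MB             # WAL staging area in shared memory
wal_writer_delay = 200ms       # background WAL writer flush cadence
full_page_writes = on          # keep on unless storage guarantees atomic 8kB writes
bgwriter_lru_maxpages = 200    # cap background writer work per round
```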

tanelpoder•11mo ago
When doing 1M+ IOPS, you probably do not want to use OS IO schedulers due to the OS (timer & spinlock) overhead [1]; better to let the hardware take care of any scheduling in its device queues. But you're right about flattening the IO burst spikes via DB configuration, so that you'd have constant slow checkpointing going on instead of a huge spike every 15 minutes...

All this depends on what kind of storage backend you're on, local consumer SSDs with just one NVMe namespace each, or local SSDs with multiple namespaces (with their own queues) or a full-blown enterprise storage backend where you have no idea what's really going in the backend :-)

[1]: https://tanelpoder.com/posts/11m-iops-with-10-ssds-on-amd-th...

Edit: Note that I wasn't proposing using an entire physical disk device (or multiple) for the low latency files, but just a part of it. Local enterprise-grade SSDs support multiple namespaces (with their own internal queues) so you can carve out just 1% of that for separate I/O processing. And with enterprise SAN arrays (or cloud elastic block store offerings) this works too, you don't know how many physical disks are involved in the backend anyway, but at your host OS level, you get a separate IO queue that is not gonna be full of thousands of checkpoint writes.

fendale•11mo ago
> local enterprise-grade SSDs support multiple namespaces (with their own internal queues)

What do you mean by namespaces here? Are they created by having different partitions or LVM volumes? As you mentioned consumer-grade SSDs only have a single namespace, I am guessing this is something that needs some config when mounting the drive?

tanelpoder•11mo ago
With SSDs that support namespaces you can use commands like "nvme create-ns" to create logical "partitioning" of the underlying device, so you'll end up with device names like this (also in my blog above):

/dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 ...

Consumer disks support only a single namespace, as far as I've seen. Different namespaces give you extra flexibility; I think some even support different sector sizes for different namespaces.

So under the hood you'd still be using the same NAND storage, but the controller can now process incoming I/Os with awareness of which "logical device" they came from. So, even if your data volume has managed to submit a burst of 1000 in-flight I/O requests via its namespace, the controller can still pick some latest I/Os from other (redo volume) namespaces to be served as well (without having to serve the other burst of I/Os first).

So, you can create a high-priority queue by using multiple namespaces on the same device. It's like logical partitioning of the SSD device I/O handling capability, not physical partitioning of disk space like the OS "fdisk" level partitioning would be. The OS "fdisk" partitioning or LVM mapping is not related to NVMe namespaces at all.

Also, I'm not a NVMe SSD expert, but this is my understanding and my test results agree so far.
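For the record, the nvme-cli side of this looks roughly like the following; the sizes, LBA format, and controller IDs are illustrative and device-specific:

```
# Create a small namespace (sizes are in logical blocks; --flbas picks the LBA format)
nvme create-ns /dev/nvme0 --nsze=2621440 --ncap=2621440 --flbas=0
# Attach it to the controller so it shows up as a block device
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0x1
# It now appears alongside the first namespace as /dev/nvme0n2
nvme list
```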

fendale•11mo ago
Ah ok - so googling a bit on this, you do specify the size when creating the namespace. So if you have multiple namespaces, they appear as separate devices to the OS, and then you can mkfs and mount each as if it's a different disk. Then you get the different IO queues at the hardware level, unlike with traditional partitioning.
tanelpoder•11mo ago
Yep, exactly - with OS level partitioning or logical volumes, you'd still end up with a single underlying block device (and a single queue) at the end of the day.
CodesInChaos•11mo ago
Could you run this on network block storage like EBS? I assume those have pretty high latency as well, even with high IOPS volumes?
tanelpoder•11mo ago
I'll see if I have a chance to run such a test on AWS in coming days (and would need to keep running it for much longer than just 5 seconds shown in the blog).

If you care about WAL write/commit latency, you could provision a small-ish EBS io2 Block Express device (with provisioned IOPS) just for your WAL files and the rest of your data can still reside on cheaper EBS storage. And you might not even need to hugely overprovision your WAL device IOPS (as databases can batch commit writes for multiple transactions).

But the main point is that once your WAL files are on a completely separate blockdevice from all the other datafile I/O, they won't suffer from various read & write IO bursts that can happen during regular database activity. On Oracle databases, I put controlfiles to these separate devices too, as they are on the critical path during redo log switches...
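Moving PostgreSQL's WAL onto such a device boils down to relocating pg_wal and symlinking it back (with a real cluster the server must be stopped first; the snippet below simulates the move with throwaway directories, so PGDATA and the mount point are stand-ins):

```shell
PGDATA=$(mktemp -d)      # stands in for the real data directory
WALDEV=$(mktemp -d)      # stands in for the dedicated WAL volume's mount point
mkdir "$PGDATA/pg_wal"

# Stop the server first on a real cluster, then relocate and symlink
mv "$PGDATA/pg_wal" "$WALDEV/pg_wal"
ln -s "$WALDEV/pg_wal" "$PGDATA/pg_wal"   # Postgres follows the symlink transparently
readlink "$PGDATA/pg_wal"                 # prints the new WAL location
```

New clusters can instead use initdb's `--waldir` option to place the WAL there from the start.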

CodesInChaos•11mo ago
How do you backup those split volumes? I like EBS volume snapshots, since they're atomic, incremental, fast to restore and make it easy to spin up a clone. But obviously that approach won't work for split volumes.
tanelpoder•11mo ago
Yep indeed, that's a tradeoff, if your DB fits into a single volume. I'm not that deeply familiar with databases other than Oracle (which has its own ways to work around this), so for ease of use, keeping everything on a single volume keeps things simpler.

One thing that I try to achieve anyway is to spread and "smoothen" the database checkpoint & fsync activity over time via database checkpointing parameters, so you won't have huge "IO storms" every 15 minutes, but just steady writing of dirty buffers going on all the time. So, even if all your files are stored on the same blockdevice, you're less likely to see a case where your WAL writes wait behind 50,000 checkpoint write requests issued just before.
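In PostgreSQL terms that smoothing is mostly controlled by the checkpoint settings; an illustrative (not prescriptive) fragment:

```ini
checkpoint_timeout = 15min            # how often a timed checkpoint starts
checkpoint_completion_target = 0.9    # spread the writes over ~90% of that interval
max_wal_size = 4GB                    # avoid extra checkpoints forced by WAL growth
```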

CoolCold•11mo ago
Well, likely you can just use those as a ZFS ZIL, or, in a more traditional setup, with an LVM writeback cache - which in my experience greatly improves latency.
dboreham•11mo ago
Aha! Useful article because I somehow never knew about pg_test_fsync until now. I wrote and maintained a similar tool long ago before open source times, when I worked on database storage engines. Several times since I've wished I still had that tool. Now I do. Excellent.
CoolCold•11mo ago
fio, used like

    fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest

is a very good approximation of "real" IOPS here. Note the "fdatasync" option: it easily tells apart a consumer SSD doing 500 IOPS from a DC SSD doing 30,000 IOPS.
anarazel•11mo ago
The numbers in the post highlight an issue I have with Samsung consumer SSDs - they've slowed down FUA writes to an absurd degree.

        open_datasync                       249.578 ops/sec    4007 usecs/op
        fdatasync                           608.573 ops/sec    1643 usecs/op
open_datasync (i.e. O_DSYNC) ends up as FUA writes, fdatasync() as a plain write followed by a cache flush.

On just about anything else a single FUA write is either the same speed as a write + fdatasync, or considerably faster.

This is pretty annoying, as using O_DSYNC is a lot more suitable for concurrent WAL writes, but because Samsung SSDs are widespread, changing the default would regress performance substantially for a good number of users.
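The two durability paths being compared map directly onto the syscall level; a minimal sketch (Linux-specific, since os.O_DSYNC needs OS support):

```python
import os
import tempfile

def durable_write(data: bytes, use_o_dsync: bool) -> int:
    """Write data durably via O_DSYNC (the FUA path) or write+fdatasync (the flush path)."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    flags = os.O_WRONLY | (os.O_DSYNC if use_o_dsync else 0)
    fd = os.open(path, flags)
    try:
        written = os.write(fd, data)   # with O_DSYNC, returns only once the data is stable
        if not use_o_dsync:
            os.fdatasync(fd)           # otherwise flush the drive cache explicitly
        return written
    finally:
        os.close(fd)
        os.unlink(path)

wal_record = b"\0" * 8192
durable_write(wal_record, use_o_dsync=True)    # open_datasync behaviour
durable_write(wal_record, use_o_dsync=False)   # fdatasync behaviour
```

Timing each variant in a loop reproduces the ops/sec comparison that pg_test_fsync prints.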