frontpage.

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•1m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•1m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•2m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•2m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•4m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•5m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•5m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•5m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•6m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•6m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•7m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•7m ago•1 comment

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•9m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•9m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•15m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•16m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•17m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•19m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•19m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•19m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•20m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
4•samasblack•22m ago•2 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•23m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•24m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•25m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•26m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•27m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•27m ago•1 comment

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•28m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•28m ago•0 comments

Reduce bandwidth costs with dm-cache: fast local SSD caching for network storage

https://devcenter.upsun.com/posts/cut-aws-bandwidth-costs-95-with-dm-cache/
85•tlar•5mo ago

Comments

AtlasBarfed•4mo ago
"When deploying infrastructure across multiple AWS availability zones (AZs), bandwidth costs can become a significant operational expense"

An expense, in the age of 100Gbit networking, that exists entirely because AWS can get away with charging the suckers, um, customers for it.

0xbadcafebee•4mo ago
AZs are whole datacenters, so I imagine their backbone bandwidth between AZs is a fraction of the total bandwidth inside the DC. If they didn't charge, it'd probably get saturated, and then there's not much point in using them for reliability.

The internet egress price is where they're bastards.

martinald•4mo ago
Definitely not. Azure doesn't charge for intra-region traffic, FWIW.

Getting terabits and terabits of 'private' interconnect is unbelievably cheap at Amazon scale. AWS even owns some of its own cables and has plans to build more.

There is _so_ much capacity available on fiber links. For example, one newish cable (Anjana) between the US and Europe has 480 Tbit/sec of capacity. That's just one cable, and it could probably already be upgraded to 10-20x that with newer modulation techniques.

random3•4mo ago
Reduce network bandwidth from the network-attached SSD volumes, yes?

0xbadcafebee•4mo ago
This is good timing; I was just looking at a use case where we need more IOPS, and the only immediate solutions involve allocating way more high-performance disks or network storage. The problem with a cache is that we have a large dataset with random access, so repeated cache hits might not be frequent. But I had a theory that you could still make an impact on performance and lower your storage performance requirements. I may try this out, though it's block-level, so it's a bit intrusive.

Another option I haven't tried is tmpfs with an overlay: initial access is RAM, falling back to the underlying slower storage. Since I'm mostly doing reads, it should be fine; writes can go to the slower disk mount. No block storage changes needed.
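
A rough sketch of that overlay idea, with placeholder paths (caveats are in the comments):

    # tmpfs upper layer + workdir for the overlay
    mkdir -p /mnt/upper /mnt/merged
    mount -t tmpfs -o size=2g tmpfs /mnt/upper
    mkdir -p /mnt/upper/data /mnt/upper/work

    # reads of untouched files still come from the slow lower layer
    # (the kernel page cache is what keeps hot reads in RAM); new
    # writes land in the tmpfs upper layer and vanish on reboot
    mount -t overlay overlay \
      -o lowerdir=/mnt/slow,upperdir=/mnt/upper/data,workdir=/mnt/upper/work \
      /mnt/merged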

otterley•4mo ago
You don’t need a tmpfs to have the OS use memory to cache block reads for you. The kernel gives you that for free.

kayson•4mo ago
I was looking into SSD caching recently and decided to go with Open-CAS instead, which should be more performant (didn't test it personally): https://github.com/Open-CAS/open-cas-linux/issues/1221

It's maintained by Intel and Huawei and the devs were very responsive.
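
For reference, Open-CAS is driven by its casadm tool; a minimal sketch with placeholder devices (from memory, so double-check against their docs):

    # start cache instance 1 on the SSD in write-through mode
    casadm --start-cache --cache-device /dev/nvme0n1 --cache-id 1 --cache-mode wt

    # put the slow backing device behind it as a core device
    casadm --add-core --cache-id 1 --core-device /dev/sdb

    # the cached block device then shows up as /dev/cas1-1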

mgerdts•4mo ago
Is Intel still working on it? Open-CAS bdev support was nearly removed from SPDK at a time when Intel still employed a SPDK development and QA team. Huawei stepped in to offer support to keep it alive, preventing its removal.

I’ve been under the impression that Intel got rid of pretty much all of their storage software employees.

quickslowdown•4mo ago
I mean to ask a genuine, good-faith question here, because I don't know much about Huawei's development team.

My head goes to the xz attack when I hear that Intel decided to stop supporting an open-source tool and a Chinese company known to sell backdoored equipment "steps in" to continue development; it makes me suspicious and concerned.

This is to say nothing of the quality of the software they write or its functionality; they may be "good stewards" of it, but is it paranoid to be unsure of that arrangement?

rbranson•4mo ago
> For e-commerce workloads, the performance benefit of write-back mode isn’t worth the data integrity risk. Our customers depend on transactional consistency, and write-through mode ensures every write operation is safely committed to our replicated Ceph storage before the application considers it complete.

Unless the writer always blindly overwrites entire files at once (never read-then-write), consistency requires consistent reads AND writes. Even then, potential ordering issues creep in. It would be really interesting to hear how they deal with it.

twotwotwo•4mo ago
They mention it as a block device, and the diagram makes it look like there's one reader. If so, this seems like it has the same function as the page cache in RAM, just saving reads, and looks a lot like https://discord.com/blog/how-discord-supercharges-network-di... (which mentions dm-cache too).

If so, safe enough, though if they're going to do that, why stop at 512MB? The big win of Flash would be that you could go much bigger.

mrkurt•4mo ago
dm-cache writeback mode is both amazing and terrifying. It reorders writes, so not only do you lose data if the cache fails, you probably just corrupted the entire backing disk.
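
For context, the write mode is just a feature argument in the device-mapper table you load; a minimal sketch following the kernel docs' example, with placeholder devices and sizes (swapping writethrough for writeback is the terrifying part):

    # 0 <origin size in 512B sectors> cache <metadata dev> <cache dev>
    #   <origin dev> <block size> <#features> <feature> <policy> <#policy args>
    dmsetup create cached --table \
      "0 419430400 cache /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/sdb 512 1 writethrough default 0"
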
saltcured•4mo ago
Yeah, when I used it on a workstation many years ago, I layered it on top of an MD RAID-1 SSD array for the cache and an MD RAID-5 HDD array for the bulk store.

I used writeback mode, but expected to wipe the machine if the caching layer ever collapsed. In the end, the SSDs outlived my interest in the machine, though I think I did fail over an HDD or two while the rest remained in normal operating mode.

namibj•4mo ago
Wow. Meanwhile, it would be so easy to just treat cache-flush commands as "only" reordering barriers without breaking single-system consistency (don't use it as the backing store of a Raft/Paxos cluster, though!).

loeg•4mo ago
Historically, I believe bcache offered a better design than dm-cache. I wonder if that has changed at all?

That said, for this use, I would be very concerned about coherency issues putting any cache in front of the actual distributed filesystem. (Unless this is the only node doing writes, I guess?)
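
(For comparison, bcache is assembled with its own userspace tooling rather than a dm table; a rough sketch with placeholder devices:)

    # format the backing (slow) device and the cache (SSD) device
    make-bcache -B /dev/sdb
    make-bcache -C /dev/nvme0n1

    # attach the cache set to the backing device via sysfs
    bcache-super-show /dev/nvme0n1 | grep cset.uuid
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach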

mgerdts•4mo ago
I remember seeing another strategy where a remote block device was (lazily?) mirrored to a local SSD. The mirror was configured such that reads from the local device were preferred and writes would go to both devices. I think this was done by someone on GCP.

Does this ring any bells? I’ve searched for this a time or two and can’t find it again.
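
For reference, stock md RAID-1 can approximate the behavior I'm describing if you mark the remote leg write-mostly (reads prefer the local SSD; writes mirror to both, lazily with write-behind). A sketch with placeholder devices, though not necessarily what I saw:

    # local SSD first; the remote device only serves reads if the SSD fails
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/nvme0n1 --write-mostly /dev/nbd0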

cperciva•4mo ago
I've done this on EC2 -- in particular, back in the days when EBS billed per I/O (as opposed to the "reserved IOPS" model where you say up front how much I/O performance you need). I haven't bothered recently, since EBS performance is good enough for most purposes and there are no automatic cost savings.
twotwotwo•4mo ago
Discord: https://discord.com/blog/how-discord-supercharges-network-di...

(Somehow the name "SuperDisks" was burned into my brain for this. Although Discord's post does use 'Super-Disks' in a section header, if you search the Internet for SuperDisks you'll find everything's about the LS-120 floppies that went by that name.)

Conch5606•4mo ago
This is not quite the same; it's for migrating from one device to another while keeping the file system writable. But it's quite neat: dm-clone[1].

I've used it before for a low downtime migration of VMs between two machines - it was a personal project and I could have just kept the VM offline for the migration, but it was fun to play around with it.

You give it a read-only backing device and a writable device that's at least as big. It will slowly copy the data from the read-only device to the writable device. If a read is issued to the dm-clone target it's either gotten from the writable device if it's already cloned or forwarded to the read-only device. Writes are always going to the writable device and afterwards the read-only device is ignored for that block.

It's not the fastest, but it's relatively easy to set up, even though using device mapper directly is a bit clunky. It's also not super efficient: IIRC, if a read goes to a chunk that hasn't been copied yet, the fetched data is handed to the reading program but not stored on the writable device, so it has to be fetched again. If the file system being copied isn't full, it's a good idea to run trimming after creating the dm-clone target, as discarded blocks are marked as not needing to be fetched.

[1] https://docs.kernel.org/admin-guide/device-mapper/dm-clone.h...
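
A minimal sketch of the dmsetup side, following the kernel docs' pattern (device paths and sizes are placeholders):

    # 0 <dest size in sectors> clone <metadata dev> <dest dev> <source dev> <region size>
    dmsetup create cloned --table \
      "0 1048576000 clone /dev/sdc1 /dev/sdc2 /dev/mapper/remote 8"

    # background copying ("hydration") can be toggled at runtime
    dmsetup message cloned 0 disable_hydration
    dmsetup message cloned 0 enable_hydration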

zipmapfoldright•4mo ago
Google's L4 cache? https://cloud.google.com/blog/products/storage-data-transfer...

magicalhippo•4mo ago
There was some discussion amongst the ZFS devs about such a feature.

As I recall, it was to change the current mirrored read strategy to be aware of the speed of the underlying devices and to prefer the faster one if it has capacity. Perhaps a fixed pool property to always read from a given device was also discussed; it's been a while, so my memory is hazy.

The use case was similar, IIRC: a customer wanted to combine a local SSD with a remote block device.

So, it might come to ZFS.

kosolam•4mo ago
Hmm, I have a few questions:

1. How is the cache invalidated to avoid reading stale data?

2. If the multi-AZ setup is for high availability, then I'd guess the only traffic between zones is replication from the active zone to the standby zones; in such a setup a read cache doesn't make much sense.

otterley•4mo ago
Why is two-thirds of their I/O crossing AZ boundaries for a read-heavy application? This application seems like it’s not well architected for AWS and puts them at availability risk in the event of a zonal impairment. It looks like they’re using Ceph instead of EBS, and it’s not clear why.

miladyincontrol•4mo ago
I just use fs-cache for networked storage caching. Good enough for Red Hat, good enough for me. Unsure how the performance compares, but I like that it works transparently, needing little more than a mount flag to activate, works fine in containers, and, when managed with cachefilesd, can scale dynamically per the configured quotas.

For local disks, though? bcache
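
A minimal sketch for an NFS client (the quota lines are the stock cachefilesd.conf defaults; everything else is a placeholder):

    # /etc/cachefilesd.conf: where the cache lives and when culling kicks in
    dir /var/cache/fscache
    brun 10%
    bcull 7%
    bstop 3%

    # start the cache daemon, then opt a mount into FS-Cache with "fsc"
    systemctl enable --now cachefilesd
    mount -t nfs -o fsc server:/export /mnt/data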