
Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project)

https://github.com/dvir-biton/MyraOS
115•dvirbt•6h ago•10 comments

How I turned Zig into my favorite language to write network programs in

https://lalinsky.com/2025/10/26/zio-async-io-for-zig.html
44•0x1997•3h ago•6 comments

Sandhill cranes have adopted a Canada gosling

https://www.smithsonianmag.com/science-nature/these-sandhill-cranes-have-adopted-a-canadian-gosli...
47•NaOH•4d ago•7 comments

Are-we-fast-yet implementations in Oberon, C++, C, Pascal, Micron and Luon

https://github.com/rochus-keller/Are-we-fast-yet
35•luismedel•4h ago•7 comments

We Saved $500k per Year by Rolling Our Own "S3"

https://engineering.nanit.com/how-we-saved-500-000-per-year-by-rolling-our-own-s3-6caec1ee1143
71•mpweiher•6h ago•41 comments

A definition of AGI

https://arxiv.org/abs/2510.18212
165•pegasus•9h ago•261 comments

You already have a Git server

https://maurycyz.com/misc/easy_git/
417•chmaynard•16h ago•329 comments

Sphere Computer – The Innovative 1970s Computer Company Everyone Forgot

https://sphere.computer/
30•ChrisArchitect•3d ago•2 comments

Ken Thompson recalls Unix's rowdy, lock-picking origins

https://thenewstack.io/ken-thompson-recalls-unixs-rowdy-lock-picking-origins/
99•dxs•10h ago•7 comments

Termite farmers fine-tune their weed control

https://arstechnica.com/science/2025/10/termite-farmers-fine-tune-their-weed-control/
8•PaulHoule•5d ago•2 comments

NORAD’s Cheyenne Mountain Combat Center, c.1966

https://flashbak.com/norad-cheyenne-mountain-combat-center-478804/
91•zdw•6d ago•47 comments

Argentina's midterm election hands landslide win to Milei's libertarian overhaul

https://www.cnbc.com/2025/10/27/argentinas-midterm-election-hands-landslide-win-to-mileis-liberta...
19•gnabgib•1h ago•7 comments

Microsoft 365 Copilot – Arbitrary Data Exfiltration via Mermaid Diagrams

https://www.adamlogue.com/microsoft-365-copilot-arbitrary-data-exfiltration-via-mermaid-diagrams-...
144•gnabgib•4h ago•25 comments

A bug that taught me more about PyTorch than years of using it

https://elanapearl.github.io/blog/2025/the-bug-that-taught-me-pytorch/
344•bblcla•3d ago•69 comments

ICE Will Use AI to Surveil Social Media

https://jacobin.com/2025/10/ice-zignal-surveillance-social-media
94•throwaway81523•2h ago•100 comments

System.LongBool

https://docwiki.embarcadero.com/Libraries/Sydney/en/System.LongBool
37•surprisetalk•5d ago•34 comments

Poison, Poison Everywhere

https://loeber.substack.com/p/29-poison-poison-everywhere
122•dividendpayee•5h ago•57 comments

Show HN: Helium Browser for Android with extensions support, based on Vanadium

https://github.com/jqssun/android-helium-browser
31•jqssun•5h ago•12 comments

Researchers demonstrate centimetre-level positioning using smartwatches

https://www.otago.ac.nz/news/newsroom/researchers-demonstrate-centimetre-level-positioning-using-...
31•geox•1w ago•8 comments

Asbestosis

https://diamondgeezer.blogspot.com/2025/10/asbestosis.html
231•zeristor•19h ago•167 comments

A Looking Glass Half Empty, Part 2: A Series of Unfortunate Events

https://www.filfre.net/2025/10/a-looking-glass-half-empty-part-2-a-series-of-unfortunate-events/
3•ibobev•6d ago•0 comments

Wren: A classy little scripting language

https://wren.io/
129•Lyngbakr•4d ago•39 comments

Making the Electron Microscope

https://www.asimov.press/p/electron-microscope
65•mailyk•10h ago•8 comments

Feed the bots

https://maurycyz.com/misc/the_cost_of_trash/
163•chmaynard•15h ago•126 comments

Eavesdropping on Internal Networks via Unencrypted Satellites

https://satcom.sysnet.ucsd.edu/
187•Bogdanp•6d ago•29 comments

Pico-Banana-400k

https://github.com/apple/pico-banana-400k
365•dvrp•1d ago•60 comments

Books by People – Defending Organic Literature in an AI World

https://booksbypeople.org/
53•ChrisArchitect•10h ago•62 comments

Alzheimer's disrupts circadian rhythms of plaque-clearing brain cells

https://medicine.washu.edu/news/alzheimers-disrupts-circadian-rhythms-of-plaque-clearing-brain-ce...
169•gmays•10h ago•29 comments

Downloadable movie posters from the 40s, 50s, 60s, and 70s

https://hrc.contentdm.oclc.org/digital/collection/p15878coll84/search
417•bookofjoe•1w ago•83 comments

Formal Reasoning [pdf]

https://cs.ru.nl/~freek/courses/fr-2025/public/fr.pdf
127•Thom2503•15h ago•27 comments

We Saved $500k per Year by Rolling Our Own "S3"

https://engineering.nanit.com/how-we-saved-500-000-per-year-by-rolling-our-own-s3-6caec1ee1143
70•mpweiher•6h ago

Comments

ch2026•2h ago
Who is “The South Korean Government”?
OsrsNeedsf2P•1h ago
It's the government that lost 850TB of citizen data with no backups[0]. Because Cloud bad.

[0] https://www.techradar.com/pro/security/the-south-korean-gove...

codedokode•54m ago
Storing the data in a foreign cloud would allow a foreign nation to play funny tricks on the country. What they need is not the cloud but a sane backup system.
PartiallyTyped•40m ago
Isolated partitions exist.
senectus1•42m ago
Because they didn't have a decent backup.
Havoc•2h ago
Tbh I feel this is one of those cases that would be significantly cleaner without serverless in the first place.

Sticking something with a 2-second lifespan on disk just to shoehorn it into the AWS serverless paradigm created problems and cost out of thin air here.

Moving at least partially to an in-memory solution is a good fix though.

tcdent•1h ago
Yeah, so now you're basically running a heavy instance to get the network throughput and the RAM, while not really using much CPU, when you could probably handle the encode with the available headroom. The article lists TLS handshakes as a significant source of CPU usage, but I must be missing something, because I don't see how that is anywhere near the top of the constraints of a system like this.

Regardless, I enjoyed the article and I appreciate that people are still finding ways to build systems tailored to their workflows.

inlined•1h ago
Maybe they’re not using keepalives in their clients, causing thousands of handshakes per second?
none2585•2h ago
I'm curious how many engineers per year this costs to maintain
UseofWeapons1•1h ago
Yes, that was my thought as well. Breakeven might be like 1 (give or take 2x)?
hinkley•1h ago
Anything worth doing needs three people, even if they're also used for other things.
codedokode•1h ago
And I am curious how many engineer-years it takes to port code to cloud services and to deal with the multiple issues you cannot even debug because you don't have root privileges in the cloud.

Without the cloud, saving a file is as simple as "with open(...) as f: f.write(data)" plus adding a record to the DB. And no weird network issues to debug.
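Spelled out as a runnable sketch (the directory, schema, and filename are all made up for illustration), the "file plus DB record" pattern looks like this:

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical local-storage version: write the blob to disk, record it in a DB.
DATA_DIR = Path(tempfile.mkdtemp())

def save_file(name: str, data: bytes, db: sqlite3.Connection) -> Path:
    path = DATA_DIR / name
    with open(path, "wb") as f:
        f.write(data)                      # the file itself lives on the local disk
    db.execute("INSERT INTO files (name, size) VALUES (?, ?)", (name, len(data)))
    db.commit()                            # the DB row is what the rest of the app queries
    return path

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT PRIMARY KEY, size INTEGER)")
saved = save_file("clip-0001.bin", b"\x00" * 1024, db)
assert saved.read_bytes() == b"\x00" * 1024
assert db.execute("SELECT size FROM files WHERE name=?", ("clip-0001.bin",)).fetchone()[0] == 1024
```

Redundancy, backups, and access control then fall to whatever the host already provides, which is exactly the point of contention in the replies.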

rajamaka•1h ago
> as simple as "with open(...) as f: f.write(data)"

Save where? With what redundancy? With what access policies? With what backup strategy? With what network topology? With what storage equipment and file system and HVAC system and...

Without on-prem, saving a file is as simple as s3.put_object() !

codedokode•1h ago
With S3, you cannot use ls, grep, and other tools.

> Save where? With what redundancy? With what access policies? With what backup strategy? With what network topology? With what storage equipment and file system and HVAC system and...

Wow that's a lot to learn before using s3... I wonder how much it costs in salaries.

> With what network topology?

You don't need to care about this when using SSDs/HDDs.

> With what access policies?

Whichever is defined in your code, no restrictions unlike in S3. No need to study complicated AWS documentation and navigate through multiple consoles (this also costs you salaries by the way). No risk of leaking files due to misconfigured cloud services.

> With what backup strategy?

Automatically backed up with rest of your server data, no need to spend time on this.

coderintherye•1h ago
I mean you can easily mount the S3 bucket to the local filesystem (e.g. using s3fs-fuse) and then use standard command line tools such as ls and grep.
codedokode•1h ago
It's probably going to be dog slow. I dealt with HDDs where just iterating through all files and directories takes hours, and network storage is going to be even slower at this scale.
hallman76•53m ago
I inherited an S3 bucket where hundreds of thousands of files were written to the bucket root. Every filename was just a UUID. ls might work after waiting to page through every file. To grep you would need to download 5 TB.
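A back-of-the-envelope sketch of why that is painful (object count, data size, and bandwidth are illustrative; the 1,000-keys-per-call figure is S3's ListObjectsV2 page limit):

```python
import math

# Rough cost of emulating "ls" and "grep" against a large bucket; figures illustrative.
num_objects = 500_000       # hundreds of thousands of files in the bucket root
page_size = 1_000           # ListObjectsV2 returns at most 1,000 keys per call
total_bytes = 5 * 10**12    # ~5 TB of data you would have to grep
bandwidth = 100 * 10**6     # assume ~100 MB/s sustained download

list_calls = math.ceil(num_objects / page_size)
download_hours = total_bytes / bandwidth / 3600

print(list_calls)                 # 500 sequential API calls just to list the keys
print(round(download_hours, 1))   # 13.9 hours of downloading before grep can start
```

So "ls" degrades into hundreds of paginated API round-trips, and "grep" into a half-day bulk download, which is the gap between POSIX tooling and blob-store APIs the thread is arguing about.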
rajamaka•1h ago
> You don't need to care about this when using SSDs/HDDs.

You do need to care when you move beyond a single server in a closet that runs your database, webserver and storage.

> No risk of leaking files due to misconfigured cloud services.

One misconfigured .htaccess file for example, could result in leaking files.

Nextgrid•3m ago
With bare-metal machines you can go very far before needing to scale beyond one machine.
inlined•1h ago
It sounds like you’re not at the scale where cloud storage is obviously useful. By the time you definitely need S3/GCS you have problems making sure files are accessible everywhere. “Grep” is a ludicrous proposition against large blob stores
bcrosby95•1h ago
You can't ever definitively answer most of those questions on someone else's cloud. You just take Amazon's word for whatever number of nines they claim.
rajamaka•1h ago
Not needing to ask the questions is the selling point.
grebc•3m ago
Bro, were you off grid last week? Your questions equally apply to AWS; you just magically handwave them away as if AWS/GCP/Azure outages aren't a thing.
Rohansi•1h ago
I don't think any of those mattered for their use case. That's why they didn't actually need S3.
AdieuToLogic•49m ago
>> Without cloud, saving a file is as simple as "with open(...) as f: f.write(data)" + adding a record to DB.

> Save where? With what redundancy? With what access policies? With what backup strategy? With what network topology? With what storage equipment and file system and HVAC system and...

Most of these concerns can be addressed with ZFS[0] provided by FreeBSD systems hosted in triple-A data centers.

See also iSCSI[1].

0 - https://docs.freebsd.org/en/books/handbook/zfs/

1 - https://en.wikipedia.org/wiki/ISCSI

hinkley•1h ago
Variation on an old classic.

Question: How do you save a small fortune in cloud savings?

Answer: First start with a large fortune.

RedShift1•39m ago
Ah, that is where logging and traceability come in! But not to worry, the cloud has excellent tools for that! The fact that logging and tracing will become half your cloud cost? Oh well, let's just sweep that under the rug.
nbngeorcjhe•1h ago
A small fraction of 1, probably? It sounds like a fairly simple service that shouldn't require much ongoing development
hinkley•1h ago
You're going to run a production system with a bus number of 1?

I think you mean a small fraction of 3 engineers. And small fractions aren't that small.

adrianN•18m ago
So far I have seen a lot more production systems with a bus factor of zero than production systems with a bus factor greater than one.
codedokode•1h ago
Especially if you have access to LLMs.
codedokode•1h ago
What I notice is that large companies use their own private clouds and datacenters. At their scale, it is cheaper to have their own storage. As a side business, they also sell cloud services themselves. And small companies probably don't have that much data to justify paying for a cloud instead of buying several SSDs/HDDs or creating an SMB share on their Windows server.
CaptainOfCoit•1h ago
> I'm curious how many engineers per year this costs to maintain

The end of the article has this:

> Consider custom infrastructure when you have both: sufficient scale for meaningful cost savings, and specific constraints that enable a simple solution. The engineering effort to build and maintain your system must be less than the infrastructure costs it eliminates. In our case, specific requirements (ephemeral storage, loss tolerance, S3 fallback) let us build something simple enough that maintenance costs stay low. Without both factors, stick with managed services.

Seems they were well aware of the tradeoffs.

elchananHaas•2h ago
Video processing is one of those things that needs caution when done serverless. This solution makes sense, especially because S3's durability guarantees aren't needed.
VladVladikoff•44m ago
I’m mostly just impressed that some janky baby monitor has racked up server fees on this scale. Amazing example of absolutely horrible engineering.

Also, just take an old phone from your drawer full of old phones, slap some free camera app on it, zip tie a car phone mount to the crib, and boom you have a free baby monitor.

bombcar•37m ago
If you don’t have fifty to a hundred dodgy PoE cameras from Alibaba tied to the crib do you even really love the baby?
Huxley1•37m ago
S3 certainly saves a lot of hassle, but in certain use cases, it really is prohibitively expensive. Has anyone tried self-hosted alternatives like MinIO or SeaweedFS? Or taken even more radical approaches? How do you balance between stability, maintenance overhead, and cost savings?
ddxv•32m ago
MinIO has moved away from having a free community fork, and I think its base cost is close to $100k a year. I've been using Garage and been happy, but I'm a single dev at orders of magnitude smaller scale than the OP, so there are certainly edge cases I'm missing in comparing the two.
Lucian6•24m ago
Having gone through S3 cost optimization ourselves, I want to share some important nuances around this approach. While the raw storage costs can look attractive, there are hidden operational costs to consider:

We found that implementing proper data durability (3+ replicas, corruption detection, automatic repair) added ~40% overhead to our initial estimates. The engineering time spent building and maintaining custom tooling for multi-region replication, access controls, and monitoring ended up being substantial - about 1.5 FTE over 18 months.

For high-throughput workloads (>500 req/s), we actually saw better cost efficiency with S3 due to their economies of scale on bandwidth. The breakeven point seems to be around 100-200TB of relatively static data with predictable access patterns. Below that, the operational overhead of running your own storage likely exceeds S3's markup.

The key is to be really honest about your use case. Are you truly at scale? Do you have the engineering resources to build AND maintain this long-term? Sometimes paying the AWS premium is worth it for the operational simplicity.
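Rough numbers behind that breakeven claim. Every figure below is an assumption for illustration (the S3 list price and a loaded engineer cost), not data from the parent comment, but the shape supports it: at 150 TB, even half an engineer costs more than the S3 storage bill.

```python
# Illustrative breakeven arithmetic; all figures below are assumptions.
tb_stored = 150                  # middle of the suggested 100-200 TB breakeven range
s3_price_per_gb_month = 0.023    # S3 Standard list price (us-east-1), storage only
fte_cost_per_year = 200_000      # assumed fully loaded cost of one engineer
fte_fraction = 0.5               # assumed steady-state maintenance share

s3_storage_per_year = tb_stored * 1_000 * s3_price_per_gb_month * 12
ops_cost_per_year = fte_cost_per_year * fte_fraction

print(round(s3_storage_per_year))  # 41400: yearly S3 storage bill at 150 TB
print(round(ops_cost_per_year))    # 100000: half an engineer already exceeds it
```

Request and egress charges would shift the numbers, but the conclusion matches the parent's point: below a few hundred TB, the markup you avoid is smaller than the people you hire.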

YZF•5m ago
Right. Having worked on a commercial S3-compatible storage product, I can tell y'all that there's a lot more to it than just sticking some files on a JBOD. It does depend on your specific requirements though. 1.5 FTE over 18 months sounds on the low side for everything you've described.

That said, the article seems to be more about optimizing their pipeline to reduce S3 usage by holding some objects in memory instead. That's very different from trying to build your own object store to replace S3.

dxxvi•13m ago
So, you want a place to store many files in a short period of time, and when there's a new file, somebody must be notified?

Have you ever thought of using a PostgreSQL DB (also on AWS) to store those files and using CDC to publish messages about them to a Kafka topic? The original design needs 3 AWS services: S3, Lambda, and SQS. This way needs 2: PostgreSQL and Kafka. I'm not sure how well this method works though :-)