frontpage.

Pebble Watch software is now open source

https://ericmigi.com/blog/pebble-watch-software-is-now-100percent-open-source
1095•Larrikin•19h ago•201 comments

Most Stable Raspberry Pi? Better NTP with Thermal Management

https://austinsnerdythings.com/2025/11/24/worlds-most-stable-raspberry-pi-81-better-ntp-with-ther...
196•todsacerdoti•7h ago•61 comments

Making Crash Bandicoot (2011)

https://all-things-andy-gavin.com/video-games/making-crash/
25•davikr•2h ago•3 comments

Unpowered SSDs slowly lose data

https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/
571•amichail•18h ago•239 comments

Human brains are preconfigured with instructions for understanding the world

https://news.ucsc.edu/2025/11/sharf-preconfigured-brain/
198•XzetaU8•7h ago•129 comments

Using an Array of Needles to Create Solid Knitted Shapes

https://dl.acm.org/doi/10.1145/3746059.3747759
40•PaulHoule•3d ago•8 comments

Nearby peer discovery without GPS using environmental fingerprints

https://www.svendewaerhert.com/blog/nearby-peer-discovery/
11•waerhert•4d ago•3 comments

Claude Advanced Tool Use

https://www.anthropic.com/engineering/advanced-tool-use
556•lebovic•18h ago•232 comments

Broccoli Man, Remastered

https://mbleigh.dev/posts/broccoli-man-remastered/
55•mbleigh•5d ago•15 comments

Meta Segment Anything Model 3

https://ai.meta.com/blog/segment-anything-model-3/?_fb_noscript=1
52•alcinos•5d ago•12 comments

Show HN: I built an interactive HN Simulator

https://news.ysimulator.run/news
399•johnsillings•20h ago•181 comments

Rethinking C++: Architecture, Concepts, and Responsibility

https://blogs.embarcadero.com/rethinking-c-architecture-concepts-and-responsibility/
38•timeoperator•5d ago•31 comments

Cool-retro-term: terminal emulator which mimics look and feel of CRTs

https://github.com/Swordfish90/cool-retro-term
259•michalpleban•20h ago•95 comments

Implications of AI for schools

https://twitter.com/karpathy/status/1993010584175141038
270•bilsbie•20h ago•303 comments

Dumb Ways to Die: Printed Ephemera

https://ilovetypography.com/2025/11/19/dumb-ways-to-die-printed-ephemera/
28•jjgreen•5d ago•19 comments

Build a Compiler in Five Projects

https://kmicinski.com/functional-programming/2025/11/23/build-a-language/
147•azhenley•1d ago•24 comments

What OpenAI did when ChatGPT users lost touch with reality

https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html
226•nonprofiteer•1d ago•354 comments

Show HN: OCR Arena – A playground for OCR models

https://www.ocrarena.ai/battle
169•kbyatnal•3d ago•53 comments

Chrome JPEG XL Issue Reopened

https://issues.chromium.org/issues/40168998
272•markdog12•1d ago•122 comments

Google's new 'Aluminium OS' project brings Android to PC

https://www.androidauthority.com/aluminium-os-android-for-pcs-3619092/
147•jmsflknr•19h ago•207 comments

Claude Opus 4.5

https://www.anthropic.com/news/claude-opus-4-5
1009•adocomplete•19h ago•466 comments

Shai-Hulud Returns: Over 300 NPM Packages Infected

https://helixguard.ai/blog/malicious-sha1hulud-2025-11-24
965•mrdosija•1d ago•731 comments

How did the Win 95 user interface code get brought to the Windows NT code base?

https://devblogs.microsoft.com/oldnewthing/20251028-00/?p=111733
126•ayi•3d ago•71 comments

The Bitter Lesson of LLM Extensions

https://www.sawyerhood.com/blog/llm-extension
129•sawyerjhood•19h ago•68 comments

Building the largest known Kubernetes cluster

https://cloud.google.com/blog/products/containers-kubernetes/how-we-built-a-130000-node-gke-cluster/
142•TangerineDream•3d ago•80 comments

Fifty Shades of OOP

https://lesleylai.info/en/fifty_shades_of_oop/
122•todsacerdoti•1d ago•74 comments

Inside Rust's std and parking_lot mutexes – who wins?

https://blog.cuongle.dev/p/inside-rusts-std-and-parking-lot-mutexes-who-win
186•signa11•5d ago•81 comments

Brain has five 'eras' – with adult mode not starting until early 30s

https://www.theguardian.com/science/2025/nov/25/brain-human-cognitive-development-life-stages-cam...
4•hackernj•42m ago•1 comment

Using Antigravity for Statistical Physics in JavaScript

https://christopherkrapu.com/blog/2025/antigravity-stat-mech/
35•ckrapu•3d ago•24 comments

Three Years from GPT-3 to Gemini 3

https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini
302•JumpCrisscross•2d ago•230 comments

Production tests: a guidebook for better systems and more sleep

https://martincapodici.com/2025/05/13/production-tests-a-guidebook-for-better-systems-and-more-sleep/
78•mcapodici•6mo ago

Comments

ashishb•6mo ago
Here's a general rule I follow along with this: "write tests along the axis of minimum change"[1]. Such tests are more valuable and require less maintenance over time.

1 - https://ashishb.net/programming/bad-and-good-ways-to-write-a...

compumike•6mo ago
I'd add that, in terms of tactical implementation, production tests can be implemented in at least two different ways (a rough sketch of both follows the list):

(1) You set up an outside service to send an HTTP request (or run a headless browser session) every minute, and your endpoint runs some internal assertions that everything looks good and returns 200 on success.

(2) You set up a scheduled job that runs every minute inside your service. This job runs some internal assertions that everything looks good, and sends a heartbeat to an outside service on success.

For #2: most apps of any complexity will already have some system for background and scheduled jobs, so #2 can make a lot of sense. It can also serve as a production assertion that your background-job system (Sidekiq, Celery, Resque, crond, systemd, etc.) is healthy and running! But it doesn't test the HTTP side of your stack at all.

For #1: it has the advantage that you also get to assert that all the layers between your user and your application are up and running: DNS, load balancers, SSL certificates, etc. But this means that on failure, it may be less immediately clear whether the problem is internal to your application or somewhere else in the stack.
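
A minimal Python sketch of both patterns (Flask for the endpoint, requests for the heartbeat; the endpoint path, heartbeat URL, and check logic are placeholders, not anything from this thread):

    # Sketch only: /health/deep, the heartbeat URL, and run_internal_assertions()
    # are illustrative placeholders.
    import requests
    from flask import Flask

    app = Flask(__name__)

    def run_internal_assertions() -> None:
        """Stand-in for real checks (DB reachable, queue depth sane, ...).
        Raises an exception when anything looks wrong."""
        ...

    # Pattern #1: an externally polled endpoint. An outside service sends a
    # request every minute and alerts when it stops seeing 200s.
    @app.get("/health/deep")
    def deep_health_check():
        try:
            run_internal_assertions()
        except Exception as exc:
            return {"ok": False, "error": str(exc)}, 500
        return {"ok": True}, 200

    # Pattern #2: an internal scheduled job (cron, Celery beat, sidekiq-cron, ...)
    # that pushes a heartbeat on success; the monitor alerts when heartbeats stop.
    HEARTBEAT_URL = "https://monitoring.example.com/checkin/abc123"  # hypothetical

    def minutely_check_job() -> None:
        run_internal_assertions()                # an exception here means no heartbeat
        requests.post(HEARTBEAT_URL, timeout=5)

Note the asymmetry: #1 fails loudly (a non-200), while #2 fails silently (a missing heartbeat), which is exactly what makes #2 double as a liveness check on the job system itself.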

My personal take has been to lean toward #2 more heavily (lots of individual check jobs that run once per minute inside Sidekiq, and then check in on success), but with a little bit of #1 sprinkled in as well (some lightweight health-check endpoints, others that do more intense checks on various parts of the system, and a few that monitor various redirects like www->root domain or http->https). For our team we implement both #1 and #2 with Heii On-Call (https://heiioncall.com/): for #2, sending heartbeats from the cron-style check jobs to the "Inbound Liveness" triggers, and for #1, implementing a bunch of "Outbound Probe" HTTP uptime checks with various assertions on the response headers, etc.

And this production monitoring is all in addition to a ton of rspec and capybara tests that run in CI before a build gets deployed. In terms of effort or lines of code, it's probably:

    90% rspec and capybara tests that run on CI (not production tests)
    9% various SystemXyzCheckJob tests that run every minute in production and send a heartbeat
    1% various health check endpoints with different assertions that are hit externally in production
And I absolutely agree about requiring multiple consecutive failures before an alarm! Whenever I'm woken up by a false positive, my default timeout (i.e. the number of consecutive failures required) gets a little bit higher :)
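
That debounce is small enough to sketch (a minimal version; the threshold of 3 is arbitrary):

    # Sketch of "alert only after N consecutive failures"; N=3 is arbitrary.
    class ConsecutiveFailureAlarm:
        def __init__(self, threshold: int = 3):
            self.threshold = threshold
            self.streak = 0  # current run of consecutive failures

        def record(self, passed: bool) -> bool:
            """Feed one check result per run; returns True when it's time to page."""
            if passed:
                self.streak = 0  # any success resets the streak
                return False
            self.streak += 1
            return self.streak == self.threshold  # fire once, on the Nth in a row

    alarm = ConsecutiveFailureAlarm(threshold=3)
    results = [alarm.record(r) for r in (False, True, False, False, False)]
    assert results == [False, False, False, False, True]  # only the 3rd straight failure pages
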
hugs•6mo ago
yeah, full end-to-end tests/monitors are like fire alarms: they can often tell you something is wrong, but not exactly what is wrong. but that doesn't mean fire alarms have no value. the most common failure mode for teams is having too many, or none at all; having a few in key places is the way to go.
mhw•6mo ago
The fabulous blazer gem includes a feature for #2: https://github.com/ankane/blazer?tab=readme-ov-file#checks - it’s limited to checks that can be expressed as SQL queries, but that can get you quite a way
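
The shape of the idea is easy to sketch outside Ruby, too (blazer itself is a gem configured through its UI; the Python below and its table/column names are invented for illustration). A check is a query that should return no rows, and any rows returned mean the check failed:

    # Minimal Python sketch of a blazer-style SQL check; schema is made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, created_at TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'processing', datetime('now', '-2 hours'))")

    STUCK_ORDERS_SQL = """
        SELECT id FROM orders
        WHERE status = 'processing'
          AND created_at < datetime('now', '-1 hour')
    """

    def run_check(conn: sqlite3.Connection) -> None:
        rows = conn.execute(STUCK_ORDERS_SQL).fetchall()
        if rows:
            print(f"check failed: {len(rows)} order(s) stuck in processing")  # page here

    run_check(conn)  # scheduled periodically, like any other production test
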
aleksiy123•6mo ago
At Google we call these probers.

Does anyone know of any tools/SaaS that do this?

Was thinking it could be a good product.

Especially if it were super easy to generate/spin up for side projects.

hugs•6mo ago
"testing in production" can be controversial, but this is a well-balanced take on it.

lately i've been working on a decentralized production testing network called 'valet network' [1] (full-disclosure: selenium creator here)

i suspect production tests are the killer app for this kind of network: test any site from anywhere, on real idle devices that more closely match real-world conditions. but as mentioned in the article, it's not that simple: devs will still need to be smart about creating test data and filtering the tests out of system logs. i'm still in the "is this something people want?" learning phase, even though this is definitely something i want and wish i had when i was helping to fix healthcare.gov back in 2013/2014.

[1]: https://gist.github.com/hugs/7ba46b32d3a21945e08e78510224610...

vasusen•6mo ago
Thank you for the balanced take on an extremely spicy topic.

At WePay (YC S09) we debated this extensively and came up with a similar middle-of-the-road solution. Making sure that a credit card can be tokenized is the critical flow, so that check should run every minute. We ended up with about 4-5 very quick production tests; they helped with debugging as well as alerting.
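
A minimal sketch of what such a minutely tokenization check could look like (the endpoint, payload, and response shape below are invented placeholders, not WePay's actual API; the PAN is a standard test card number):

    # Hypothetical production test: tokenize a test card, assert a token comes back.
    import requests

    TOKENIZE_URL = "https://payments.example.com/v1/tokens"  # placeholder
    TEST_CARD = {"number": "4242424242424242", "exp": "12/30", "cvc": "123"}

    def check_tokenization() -> None:
        resp = requests.post(TOKENIZE_URL, json={"card": TEST_CARD}, timeout=10)
        resp.raise_for_status()                  # fail loudly on any non-2xx
        assert resp.json().get("token"), "tokenization returned no token"

    # Scheduled once per minute in production; any exception triggers an alert.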

I am now building a fully automated testing solution at Donobu (https://www.donobu.com), and production tests definitely come up as their own subcategory of e2e tests. I am going to use your guidelines to refine our prompt and bound our production test generator.

testthetest•6mo ago
> Running a test every minute, or 1440 times a day, will show up quite a lot in logs, metrics, and traces.

...not to mention that automated tests are by definition bot traffic, and websites do (and should) have protections against it. Cloudflare or AWS WAF tends to filter out some of our AWS Device Farm tests, and running automated tests directly from EC2 instances is pretty much guaranteed to be caught by a CAPTCHA. Which is not a complaint: this is literally what those protections were designed to do.

A way to mitigate this issue is to implement "test-only" user agents or tokens to make sure that synthetic requests are distinguishable from real ones, but that means that our code does something in testing that it doesn't do in "real life". (The full Volkswagen effect.)
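
One hedged way to implement the "test-only token" idea (the header names, secret handling, and HMAC scheme below are assumptions, not an established standard):

    # Sketch: tag synthetic requests with a signed header so the application (or
    # a WAF allowlist rule) can recognize them. Header names and scheme are made up.
    import hashlib
    import hmac
    import time

    import requests

    SHARED_SECRET = b"rotate-me-regularly"  # known to both prober and application

    def synthetic_headers() -> dict:
        ts = str(int(time.time()))
        sig = hmac.new(SHARED_SECRET, ts.encode(), hashlib.sha256).hexdigest()
        return {
            "User-Agent": "prodtest-bot/1.0",  # easy to filter in logs and WAF rules
            "X-Synthetic-Ts": ts,
            "X-Synthetic-Sig": sig,            # HMAC so outsiders can't spoof the tag
        }

    resp = requests.get("https://example.com/health", headers=synthetic_headers(), timeout=10)

The server side would verify the signature (with a freshness window on the timestamp) before exempting the request from bot protection or excluding it from metrics, which is of course exactly the "does something in testing that it doesn't do in real life" trade-off described above.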

burnt-resistor•6mo ago
Also known as deep monitoring: checking that functionality is available and working correctly.