frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

DaVinci Resolve releases Photo Editor

https://www.blackmagicdesign.com/products/davinciresolve/photo
118•thebiblelover7•2h ago•28 comments

A new spam policy for "back button hijacking"

https://developers.google.com/search/blog/2026/04/back-button-hijacking
60•zdw•1h ago•21 comments

Someone bought 30 WordPress plugins and planted a backdoor in all of them

https://anchor.host/someone-bought-30-wordpress-plugins-and-planted-a-backdoor-in-all-of-them/
785•speckx•10h ago•224 comments

GitHub Stacked PRs

https://github.github.com/gh-stack/
539•ezekg•7h ago•290 comments

Lean proved this program correct; then I found a bug

https://kirancodes.me/posts/log-who-watches-the-watchers.html
148•bumbledraven•4h ago•82 comments

WiiFin – Jellyfin Client for Nintendo Wii

https://github.com/fabienmillet/WiiFin
100•throwawayk7h•4h ago•39 comments

Design and implementation of DuckDB internals

https://duckdb.org/library/design-and-implementation-of-duckdb-internals/
59•mpweiher•3d ago•5 comments

Nothing Ever Happens: Polymarket bot that always buys No on non-sports markets

https://github.com/sterlingcrispin/nothing-ever-happens
374•m-hodges•12h ago•198 comments

Rust Threads on the GPU

https://www.vectorware.com/blog/threads-on-gpu/
17•PaulHoule•4d ago•4 comments

US appeals court declares 158-year-old home distilling ban unconstitutional

https://nypost.com/2026/04/11/us-news/us-appeals-court-declares-158-year-old-home-distilling-ban-...
336•t-3•14h ago•245 comments

How to make Firefox builds 17% faster

https://blog.farre.se/posts/2026/04/10/caching-webidl-codegen/
148•mbitsnbites•9h ago•24 comments

Write less code, be more responsible

https://blog.orhun.dev/code-responsibly/
50•orhunp_•2d ago•28 comments

Servo is now available on crates.io

https://servo.org/blog/2026/04/13/servo-0.1.0-release/
441•ffin•16h ago•140 comments

Make tmux pretty and usable (2024)

https://hamvocke.com/blog/a-guide-to-customizing-your-tmux-conf/
337•speckx•13h ago•211 comments

I shipped a transaction bug, so I built a linter

https://leonh.fr/posts/go-transaction-linter/
6•leonhfr•3d ago•0 comments

Building a CLI for all of Cloudflare

https://blog.cloudflare.com/cf-cli-local-explorer/
278•soheilpro•12h ago•90 comments

Air Powered Segment Display? [video]

https://www.youtube.com/watch?v=E1BLGpE5zH0
69•ProfDreamer•2d ago•9 comments

GAIA – Open-source framework for building AI agents that run on local hardware

https://amd-gaia.ai/docs
111•galaxyLogic•9h ago•25 comments

Show HN: Ithihāsas – a character explorer for Hindu epics, built in a few hours

https://www.ithihasas.in
132•cvrajeesh•9h ago•32 comments

The AI revolution in math has arrived

https://www.quantamagazine.org/the-ai-revolution-in-math-has-arrived-20260413/
53•sonabinu•5h ago•26 comments

Android now stops you sharing your location in photos

https://shkspr.mobi/blog/2026/04/android-now-stops-you-sharing-your-location-in-photos/
316•edent•16h ago•282 comments

I just want simple S3

https://blog.feld.me/posts/2026/04/i-just-want-simple-s3/
125•g0xA52A2A•2d ago•72 comments

Tool to explore regularly sampled time series

https://github.com/rajivsam/tseda
8•rsva•3d ago•0 comments

Hacker compromises A16Z-backed phone farm, calling them the 'antichrist'

https://www.404media.co/hacker-compromises-a16z-backed-phone-farm-tries-to-post-memes-calling-a16...
18•wibbily•56m ago•4 comments

What we learned building a Rust runtime for TypeScript

https://encore.dev/blog/rust-runtime
51•vinhnx•2d ago•12 comments

Tracking down a 25% Regression on LLVM RISC-V

https://blog.kaving.me/blog/tracking-down-a-25-regression-on-llvm-risc-v/
104•luu•1d ago•21 comments

N-Day-Bench – Can LLMs find real vulnerabilities in real codebases?

https://ndaybench.winfunc.com
46•mufeedvh•6h ago•11 comments

Visualizing CPU Pipelining (2024)

https://timmastny.com/blog/visualizing-cpu-pipelining/
72•flipacholas•10h ago•9 comments

Why it’s impossible to measure England’s coastline

https://www.bbc.com/travel/article/20260410-why-its-impossible-to-measure-englands-coastline
23•BiraIgnacio•4h ago•20 comments

New Orleans's Car-Crash Conspiracy

https://www.newyorker.com/magazine/2026/04/20/the-car-crash-conspiracy
89•Geekette•10h ago•54 comments

Production tests: a guidebook for better systems and more sleep

https://martincapodici.com/2025/05/13/production-tests-a-guidebook-for-better-systems-and-more-sleep/
78•mcapodici•11mo ago

Comments

ashishb•10mo ago
Here's a general rule I follow along with this: "write tests along the axis of minimum change"[1]. Such tests are more valuable and require less maintenance over time.

1 - https://ashishb.net/programming/bad-and-good-ways-to-write-a...

compumike•10mo ago
I'd add that, in terms of tactical implementation, production tests can be implemented at least two different ways:

(1) You set up an outside service to send an HTTP request (or run a headless browser session) every minute, and your endpoint runs some internal assertions that everything looks good, and returns 200 on success.

(2) You set up a scheduled job to run every minute internal to your service. This job does some internal assertions that everything looks good, and sends a heartbeat to an outside service on success.

For #2: most apps of any complexity will already have some system for background and scheduled jobs, so #2 can make a lot of sense. It can also serve as a production assertion that your background job system (Sidekiq, Celery, Resque, crond, systemd, etc) is healthy and running! But it doesn't test the HTTP side of your stack at all.
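Approach #2 can be sketched as a framework-agnostic job body (the heartbeat URL, check names, and data-fetching stubs below are all hypothetical; Sidekiq/Celery/crond wiring is omitted):

```python
import urllib.request

HEARTBEAT_URL = "https://monitoring.example.com/heartbeat/abc123"  # hypothetical


def fetch_recent_order_count() -> int:
    """Stub standing in for a real DB query."""
    return 42


def fetch_background_queue_depth() -> int:
    """Stub standing in for e.g. a Redis LLEN call."""
    return 7


def checks_pass() -> bool:
    """Internal assertions: does everything look good right now?"""
    return fetch_recent_order_count() > 0 and fetch_background_queue_depth() < 10_000


def run_scheduled_check(send=lambda url: urllib.request.urlopen(url, timeout=5)) -> bool:
    """Run every minute by the job scheduler; heartbeat only on success."""
    if not checks_pass():
        return False  # stay silent; the outside service alarms on missed beats
    send(HEARTBEAT_URL)
    return True
```

The key inversion: the outside service alarms on *absence* of heartbeats, so a dead scheduler, a crashed app, and a failing assertion all surface the same way.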

For #1: it has the advantage that you also get to assert that all the layers between your user and your application are up and running: DNS, load balancers, SSL certificates, etc. But this means that on failure, it may be less immediately clear whether the failure is internal to your application, or somewhere else in the stack.
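Approach #1 boils down to a health endpoint that runs internal assertions and maps the result to an HTTP status; the external prober only asserts on the 200. A minimal sketch (the check names and stubs are made up):

```python
def db_reachable() -> bool:
    """Stub for a real 'SELECT 1' against the primary database."""
    return True


def cache_warm() -> bool:
    """Stub for a real check, e.g. a recently-written cache key exists."""
    return True


def health_endpoint() -> tuple[int, str]:
    """Handler body for e.g. GET /health-check, polled by an outside service."""
    checks = [("db", db_reachable), ("cache", cache_warm)]
    failures = [name for name, check in checks if not check()]
    if failures:
        return 500, "FAIL: " + ",".join(failures)  # prober alarms on non-200
    return 200, "OK"
```

Naming the failing checks in the body helps with the ambiguity mentioned above: a 500 with "FAIL: cache" points inside the app, while a timeout or TLS error points at the layers in between.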

My personal take has been to lean toward #2 more heavily (lots of individual check jobs that run once per minute inside Sidekiq, and then check-in on success), but with a little bit of #1 sprinkled in as well (some lightweight health-check endpoints, others that do more intense checks on various parts of the system, a few that monitor various redirects like www->root domain or http->https). And for our team we implement both #1 and #2 with Heii On-Call https://heiioncall.com/ : for #2, sending heartbeats from the cron-style check jobs to the "Inbound Liveness" triggers, and for #1, implementing a bunch of "Outbound Probe" HTTP uptime checks with various assertions on the response headers etc.

And this production monitoring is all in addition to a ton of rspec and capybara tests that run in CI before a build gets deployed. In terms of effort or lines of code, it's probably:

    90% rspec and capybara tests that run on CI (not production tests)
    9% various SystemXyzCheckJob tests that run every minute in production and send a heartbeat
    1% various health check endpoints with different assertions that are hit externally in production
And absolutely agree about requiring multiple consecutive failures before an alarm! Whenever I'm woken up by a false positive, my default timeout (i.e. # of consecutive failures required) gets a little bit higher :)
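The "multiple consecutive failures before an alarm" rule is just a counter that resets on any success; a minimal sketch:

```python
class Debouncer:
    """Alarm only after N consecutive failures; one success resets the streak."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.streak = 0

    def record(self, ok: bool) -> bool:
        """Feed one check result; return True when it's time to page someone."""
        self.streak = 0 if ok else self.streak + 1
        return self.streak >= self.threshold
```

With threshold=3 and a once-a-minute check, a single flaky minute stays silent, but three failing minutes in a row page.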
hugs•10mo ago
yeah, full end-to-end tests/monitors are like fire alarms: they can often tell you something is wrong, but not exactly what is wrong. but that doesn't mean fire alarms have no value. the most common failure mode for teams is having too many or none at all. having a few in key places is the way to go.
mhw•10mo ago
The fabulous blazer gem includes a feature for #2: https://github.com/ankane/blazer?tab=readme-ov-file#checks - it’s limited to checks that can be expressed as SQL queries, but that can get you quite a way
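The pattern behind such SQL checks is simple: a query written so that any returned row is an anomaly. A self-contained sketch with sqlite3 (the table and threshold are made up for illustration):

```python
import sqlite3


def run_sql_check(conn: sqlite3.Connection, query: str) -> list:
    """A check in the blazer style: the query encodes 'rows = problems'."""
    return conn.execute(query).fetchall()


# Demo with an in-memory database and a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, minutes_old INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "done", 5), (2, "pending", 90), (3, "done", 1)],
)

# Alert if any order has been pending for more than an hour.
stuck = run_sql_check(
    conn, "SELECT id FROM orders WHERE status = 'pending' AND minutes_old > 60"
)
```

A scheduler runs each such query on an interval and alarms when the result set is non-empty, which is exactly the shape blazer's checks feature gives you.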
aleksiy123•10mo ago
At Google we call these probers.

Does anyone know of any tools/SaaS that do this?

Was thinking it may be a good potential product.

Especially if it was super easy to generate/spin up for side projects.

hugs•10mo ago
"testing in production" can be controversial, but this is a well-balanced take on it.

lately i've been working on a decentralized production testing network called 'valet network' [1] (full-disclosure: selenium creator here)

i suspect production tests are the killer app for this kind of network: test any site on a real device from anywhere on idle devices that more closely match real world conditions, but as mentioned in the article, it's not that simple. dev users will still need to be smart about creating test data and filtering out the tests from system logs. i'm still in the "is this something people want?" learning phase, even though this is definitely something i want and wish i had when i was helping to fix healthcare.gov back in 2013/2014.

[1]: https://gist.github.com/hugs/7ba46b32d3a21945e08e78510224610...

vasusen•10mo ago
Thank you for the balanced take on an extremely spicy topic.

At WePay (YC S09) we debated this extensively and came up with a similar middle-of-the-road solution. Making sure that a credit card can get tokenized is the critical flow and should run every minute. We ended up with about 4-5 very quick production tests. They helped with debugging as well as alerting.

I am now building a full, automated testing solution at Donobu (https://www.donobu.com), and production tests definitely come up as their own subcategory of e2e tests. I am going to use your guidelines to refine our prompt and bound our production test generator.

testthetest•10mo ago
> Running a test every minute, or 1440 times a day, will show up quite a lot in logs, metrics, and traces.

...not to mention that automated tests are by definition bot traffic, and websites do/should have protections against spam. Cloudflare or AWS WAF tends to filter out some of our AWS DeviceFarm tests, and running automated tests directly from EC2 instances is pretty much guaranteed to be caught by Captcha. Which is not a complaint: this is literally what they were designed to do.

A way to mitigate this issue is to implement "test-only" user agents or tokens to make sure that synthetic requests are distinguishable from real ones, but that means that our code does something in testing that it doesn't do in "real life". (The full Volkswagen effect.)
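One lightweight version of that mitigation: tag synthetic requests with an HMAC over the check name under a shared secret, so the header can't be trivially forged, and let the logging/WAF layer treat matches specially. A sketch (the header name and secret are invented, and this of course still has the "test-only code path" caveat above):

```python
import hashlib
import hmac

SYNTHETIC_HEADER = "X-Synthetic-Check"    # hypothetical header name
SYNTHETIC_SECRET = b"rotate-me-regularly"  # hypothetical shared secret


def synthetic_token(check_name: str) -> str:
    """Token the production test attaches to each request it sends."""
    return hmac.new(SYNTHETIC_SECRET, check_name.encode(), hashlib.sha256).hexdigest()


def is_synthetic(headers: dict, check_name: str) -> bool:
    """Server side: constant-time comparison so the tag can't be forged cheaply."""
    supplied = headers.get(SYNTHETIC_HEADER, "")
    return hmac.compare_digest(supplied, synthetic_token(check_name))
```

Requests flagged this way can be excluded from analytics and allow-listed past the bot protections, while everything else on the request path stays identical to real traffic.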

burnt-resistor•10mo ago
Also known as deep monitoring: checking that functionality is available and working correctly.