(1) You set up an outside service to send an HTTP request (or run a headless browser session) every minute; your endpoint runs some internal assertions that everything looks good and returns 200 on success.
(2) You set up a scheduled job to run every minute internal to your service. This job runs some internal assertions that everything looks good and sends a heartbeat to an outside service on success.
For #2: most apps of any complexity will already have some system for background and scheduled jobs, so #2 can make a lot of sense. It can also serve as a production assertion that your background job system (Sidekiq, Celery, Resque, crond, systemd, etc.) is healthy and running! But it doesn't test the HTTP side of your stack at all.
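For concreteness, a check job along those lines might look roughly like this. This is just a sketch assuming a Rails + Sidekiq app; the class name, the specific assertions, and the heartbeat URL are placeholders, not anything from the article:

    require "sidekiq"
    require "sidekiq/api"
    require "net/http"

    # Approach #2: scheduled every minute via sidekiq-cron or similar.
    class OrdersSystemCheckJob
      include Sidekiq::Job

      def perform
        # Internal assertions -- raising skips the heartbeat, so the outside
        # service alarms when check-ins stop arriving.
        raise "database unreachable" unless ActiveRecord::Base.connection.active?
        raise "default queue backed up" if Sidekiq::Queue.new("default").latency > 60

        # Everything looks good: check in with the monitoring service.
        Net::HTTP.post(URI("https://example.com/heartbeat/orders-system-check"), "")
      end
    end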
For #1: it has the advantage that you also get to assert that all the layers between your user and your application are up and running: DNS, load balancers, SSL certificates, etc. But this means that on failure, it may be less immediately clear whether the failure is internal to your application, or somewhere else in the stack.
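On the application side, the endpoint for #1 can stay very small. A rough sketch, again assuming Rails, with a hypothetical route and placeholder assertions:

    # Approach #1's server side: an external prober hits this every minute
    # and asserts on the 200.
    class HealthController < ApplicationController
      # GET /healthz
      def show
        checks = {
          database: ActiveRecord::Base.connection.active?,
          cache:    Rails.cache.write("healthz:probe", Time.now.to_i),
        }

        if checks.values.all?
          render json: { status: "ok" }, status: :ok
        else
          render json: { status: "fail", checks: checks }, status: :service_unavailable
        end
      end
    end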
My personal take has been to lean toward #2 more heavily (lots of individual check jobs that run once per minute inside Sidekiq and check in on success), with a little bit of #1 sprinkled in as well (some lightweight health-check endpoints, others that do more intense checks on various parts of the system, and a few that monitor redirects like www -> root domain or http -> https). For our team we implement both #1 and #2 with Heii On-Call (https://heiioncall.com/): for #2, the cron-style check jobs send heartbeats to "Inbound Liveness" triggers; for #1, a bunch of "Outbound Probe" HTTP uptime checks make various assertions on the response headers etc.
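The redirect checks are the kind of thing you can assert with a few lines regardless of which monitoring service runs the probe. A rough illustration (the domain is a placeholder):

    require "net/http"

    # Fetch without following redirects, then assert on status + Location.
    res = Net::HTTP.get_response(URI("http://www.example.com/"))

    ok = res.is_a?(Net::HTTPMovedPermanently) &&
         res["location"] == "https://example.com/"

    puts(ok ? "redirect ok" : "redirect broken: #{res.code} -> #{res['location'].inspect}")
    exit(ok ? 0 : 1)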
And this production monitoring is all in addition to a ton of rspec and capybara tests that run in CI before a build gets deployed. In terms of effort or lines of code, it's probably:
90% rspec and capybara tests that run on CI (not production tests)
9% various SystemXyzCheckJob tests that run every minute in production and send a heartbeat
1% various health check endpoints with different assertions that are hit externally in production
And absolutely agree about requiring multiple consecutive failures before an alarm! Whenever I'm woken up by a false positive, my default timeout (i.e. # of consecutive failures required) gets a little bit higher :)

lately i've been working on a decentralized production testing network called 'valet network' [1] (full-disclosure: selenium creator here)
i suspect production tests are the killer app for this kind of network: test any site on a real device, from anywhere, on idle devices that more closely match real-world conditions. but as mentioned in the article, it's not that simple: devs will still need to be smart about creating test data and filtering the tests out of system logs. i'm still in the "is this something people want?" learning phase, even though this is definitely something i want and wish i had when i was helping to fix healthcare.gov back in 2013/2014.
[1]: https://gist.github.com/hugs/7ba46b32d3a21945e08e78510224610...
At WePay (YC S09) we debated this extensively and came up with a similar middle-of-the-road solution. Making sure a credit card can be tokenized is the critical flow, and that check should run every minute. We ended up with about 4-5 very quick production tests. They helped with debugging as well as alerting.
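Not their actual code, but the shape of such a check is something like this; the endpoint, test card, and alerting hook are all hypothetical:

    require "net/http"
    require "json"

    # Tokenize a well-known test card against the payments API once a minute.
    res = Net::HTTP.post(
      URI("https://payments.example.com/v1/tokens"),
      { card_number: "4111111111111111", exp: "12/30", cvv: "123" }.to_json,
      "Content-Type" => "application/json"
    )
    token = JSON.parse(res.body)["token"] rescue nil

    if res.is_a?(Net::HTTPSuccess) && token
      puts "tokenization ok"          # e.g. send a heartbeat here
    else
      warn "tokenization check failed: #{res.code} #{res.body[0, 200]}"
      exit 1                          # e.g. alert after N consecutive failures
    end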
I am now building a full, automated testing solution at Donobu (https://www.donobu.com), and production tests definitely come up as their own subcategory of e2e tests. I am going to use your guidelines to refine our prompt and bound our production test generator.