frontpage.

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•45s ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•3m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•3m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•6m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•10m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•10m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
1•Brajeshwar•10m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•13m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•16m ago•1 comment

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•17m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•18m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•18m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•23m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•28m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•32m ago•1 comment

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•33m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•34m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•41m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•44m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•44m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•45m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•46m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•46m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•47m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•47m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•51m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•51m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•53m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•53m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•1h ago•0 comments

Production tests: a guidebook for better systems and more sleep

https://martincapodici.com/2025/05/13/production-tests-a-guidebook-for-better-systems-and-more-sleep/
78•mcapodici•8mo ago

Comments

ashishb•8mo ago
Here's a general rule I follow along with this: "write tests along the axis of minimum change"[1]. Such tests are more valuable and require less maintenance over time.

1 - https://ashishb.net/programming/bad-and-good-ways-to-write-a...

compumike•8mo ago
I'd add that, in terms of tactical implementation, production tests can be set up in at least two different ways:

(1) You set up an outside service to send an HTTP request (or run a headless browser session) every minute; your endpoint runs some internal assertions that everything looks good and returns 200 on success.

(2) You set up a scheduled job, internal to your service, that runs every minute. This job runs some internal assertions that everything looks good and sends a heartbeat to an outside service on success.

For #2: most apps of any complexity will already have some system for background and scheduled jobs, so #2 can make a lot of sense. It can also serve as a production assertion that your background job system (Sidekiq, Celery, Resque, crond, systemd, etc.) is healthy and running! But it doesn't test the HTTP side of your stack at all.

For #1: it has the advantage that you also get to assert that all the layers between your user and your application are up and running: DNS, load balancers, SSL certificates, etc. But this means that on failure, it may be less immediately clear whether the failure is internal to your application, or somewhere else in the stack.
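
As a rough sketch of the two shapes (the names, heartbeat URL, and bare WSGI framing below are just illustrative, not any particular tool's API):

    # Pattern #1: an endpoint an external prober hits every minute.
    # Pattern #2: a scheduled job that asserts and then checks in.
    # Everything here is a hypothetical sketch, not a real service.
    import urllib.request

    def checks_pass() -> bool:
        """Run internal assertions: DB reachable, queue draining, etc."""
        return True  # replace with real assertions against your own system

    def health_check_app(environ, start_response):
        # #1: a bare WSGI endpoint; return 200 only if the assertions hold.
        status = "200 OK" if checks_pass() else "500 Internal Server Error"
        start_response(status, [("Content-Type", "text/plain")])
        return [status.encode()]

    def scheduled_check_job():
        # #2: run from cron/Sidekiq/Celery every minute; on success, send a
        # heartbeat. On failure we stay silent, and the monitoring service
        # alarms when the heartbeat goes missing.
        if checks_pass():
            urllib.request.urlopen("https://monitoring.example/heartbeat/abc123")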

My personal take has been to lean toward #2 more heavily (lots of individual check jobs that run once per minute inside Sidekiq, and then check in on success), but with a little bit of #1 sprinkled in as well (some lightweight health-check endpoints, others that do more intense checks on various parts of the system, a few that monitor various redirects like www->root domain or http->https). And for our team we implement both #1 and #2 with Heii On-Call (https://heiioncall.com/): for #2, sending heartbeats from the cron-style check jobs to the "Inbound Liveness" triggers, and for #1, implementing a bunch of "Outbound Probe" HTTP uptime checks with various assertions on the response headers etc.

And this production monitoring is all in addition to a ton of rspec and capybara tests that run in CI before a build gets deployed. In terms of effort or lines of code, it's probably:

    90% rspec and capybara tests that run on CI (not production tests)
    9% various SystemXyzCheckJob tests that run every minute in production and send a heartbeat
    1% various health check endpoints with different assertions that are hit externally in production
And absolutely agree about requiring multiple consecutive failures before an alarm! Whenever I'm woken up by a false positive, my default timeout (i.e. # of consecutive failures required) gets a little bit higher :)
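
For what it's worth, the consecutive-failure debounce is only a few lines; a minimal sketch (the threshold is a made-up default):

    # Sketch: only alarm after N consecutive failures (N is a tuning knob).
    CONSECUTIVE_FAILURES_REQUIRED = 3
    _failure_streak = 0

    def record_check_result(passed: bool) -> bool:
        """Return True when it's time to actually wake someone up."""
        global _failure_streak
        _failure_streak = 0 if passed else _failure_streak + 1
        return _failure_streak >= CONSECUTIVE_FAILURES_REQUIRED
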
hugs•8mo ago
yeah, full end-to-end tests/monitors are like fire alarms: they can often tell you something is wrong, but not exactly what is wrong. but that doesn't mean fire alarms have no value. the most common failure mode for teams is having too many or none at all. having a few in key places is the way to go.
mhw•8mo ago
The fabulous blazer gem includes a feature for #2: https://github.com/ankane/blazer?tab=readme-ov-file#checks - it’s limited to checks that can be expressed as SQL queries, but that can get you quite a way
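
Outside a Rails stack the same idea is tiny in any language: run a "rows that should never exist" query on a schedule and alert if it returns anything. A sketch (the schema and query are invented examples, not Blazer's API):

    # Sketch of a Blazer-style SQL check; the schema and query are made up.
    import sqlite3  # stand-in; swap in your real DB driver

    BAD_ROWS_QUERY = """
        SELECT id FROM orders
        WHERE paid_at IS NOT NULL
          AND fulfilled_at IS NULL
          AND paid_at < datetime('now', '-1 day')
    """

    def sql_check(conn: sqlite3.Connection) -> bool:
        """The check passes when the 'should never happen' query returns no rows."""
        return conn.execute(BAD_ROWS_QUERY).fetchone() is None
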
aleksiy123•8mo ago
At Google we call these probers.

Does anyone know of any tools/SaaS that do this?

Was thinking it may be a good potential product.

Especially if it was super easy to generate/spin up for side projects.

hugs•8mo ago
"testing in production" can be controversial, but this is a well-balanced take on it.

lately i've been working on a decentralized production testing network called 'valet network' [1] (full disclosure: selenium creator here)

i suspect production tests are the killer app for this kind of network: test any site from anywhere on real, idle devices that more closely match real-world conditions. but as mentioned in the article, it's not that simple. dev users will still need to be smart about creating test data and filtering the tests out of system logs. i'm still in the "is this something people want?" learning phase, even though this is definitely something i want and wish i had when i was helping to fix healthcare.gov back in 2013/2014.

[1]: https://gist.github.com/hugs/7ba46b32d3a21945e08e78510224610...

vasusen•8mo ago
Thank you for the balanced take on an extremely spicy topic.

At WePay (YC S09) we debated this extensively and came up with a similar middle-of-the-road solution. Making sure that a credit card can be tokenized is the critical flow, so that check should run every minute. We ended up with about 4-5 very quick production tests. They helped with debugging as well as alerting.
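
In rough pseudocode that kind of check looks like the following (the endpoint, test card number, and response shape are all hypothetical):

    # Sketch of a "can we still tokenize a card?" check, run once a minute.
    # The endpoint, test card number, and response shape are hypothetical.
    import json
    import urllib.request

    def tokenization_check() -> bool:
        payload = json.dumps({"number": "4111111111111111", "exp": "12/30"}).encode()
        req = urllib.request.Request(
            "https://payments.example.com/v1/tokens",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            body = json.load(resp)
            return resp.status == 200 and "token" in body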

I am now building a full, automated testing solution at Donobu (https://www.donobu.com), and production tests definitely come up as their own subcategory of e2e tests. I am going to use your guidelines to refine our prompt and bound our production test generator.

testthetest•8mo ago
> Running a test every minute, or 1440 times a day, will show up quite a lot in logs, metrics, and traces.

...not to mention that automated tests are by definition bot traffic, and websites do/should have protections against spam. Cloudflare or AWS WAF tends to filter out some of our AWS DeviceFarm tests, and running automated tests directly from EC2 instances is pretty much guaranteed to be caught by a CAPTCHA. Which is not a complaint: this is literally what these protections were designed to do.

A way to mitigate this issue is to implement "test-only" user agents or tokens to make sure that synthetic requests are distinguishable from real ones, but that means that our code does something in testing that it doesn't do in "real life". (The full Volkswagen effect.)
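
One common shape of that mitigation (the header name and environment variable below are invented) is to tag synthetic requests with a shared secret so the WAF can allow-list them and the logging/metrics pipeline can drop them:

    # Sketch: tag synthetic traffic with a shared-secret header (names invented).
    import hmac
    import os

    SYNTHETIC_HEADER = "X-Synthetic-Check"

    def is_synthetic(request_headers: dict) -> bool:
        """True if the request carries the shared secret used by our own probers."""
        supplied = request_headers.get(SYNTHETIC_HEADER, "")
        return hmac.compare_digest(supplied, os.environ.get("SYNTHETIC_TOKEN", ""))

    # Probers add the header, the WAF allow-lists it, and the logging/metrics
    # pipeline drops requests where is_synthetic(...) is True.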

burnt-resistor•8mo ago
Also known as deep monitoring: checking that functionality is available and working correctly.