
LLM from scratch, part 28 – training a base model from scratch on an RTX 3090

https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch
91•gpjt•6d ago•10 comments

The Joy of Playing Grandia, on Sega Saturn

https://www.segasaturnshiro.com/2025/11/27/the-joy-of-playing-grandia-on-sega-saturn/
58•tosh•2h ago•14 comments

Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns

https://algodrill.io
19•henwfan•1h ago•3 comments

No ARIA is better than bad ARIA

https://www.w3.org/WAI/ARIA/apg/practices/read-me-first/
74•robin_reala•6d ago•35 comments

A deep dive into QEMU: The Tiny Code Generator (TCG), part 1

https://airbus-seclab.github.io/qemu_blog/tcg_p1.html
14•costco•6d ago•1 comment

Epsilon: A WASM virtual machine written in Go

https://github.com/ziggy42/epsilon
61•ziggy42•1w ago•16 comments

Icons in Menus Everywhere – Send Help

https://blog.jim-nielsen.com/2025/icons-in-menus/
593•ArmageddonIt•16h ago•246 comments

The universal weight subspace hypothesis

https://arxiv.org/abs/2512.05117
297•lukeplato•11h ago•104 comments

ZX Spectrum Next on the Internet: Xberry Pi ESP01 and Pi Zero Upgrades

https://retrogamecoders.com/zx-spectrum-next-on-the-internet-xberry-pi-esp01-and-pi-zero-upgrades/
4•ibobev•1h ago•0 comments

Kroger acknowledges that its bet on robotics went too far

https://www.grocerydive.com/news/kroger-ocado-close-automated-fulfillment-centers-robotics-grocer...
185•JumpCrisscross•12h ago•165 comments

Brent's Encapsulated C Programming Rules (2020)

https://retroscience.net/brents-c-programming-rules.html
3•p2detar•56m ago•1 comment

Manual: Spaces

https://type.today/en/journal/spaces
71•doener•12h ago•7 comments

Jepsen: NATS 2.12.1

https://jepsen.io/analyses/nats-2.12.1
376•aphyr•17h ago•138 comments

Strong earthquake hits northern Japan, tsunami warning issued

https://www3.nhk.or.jp/nhkworld/en/news/20251209_02/
324•lattis•21h ago•149 comments

Microsoft increases Office 365 and Microsoft 365 license prices

https://office365itpros.com/2025/12/08/microsoft-365-pricing-increase/
399•taubek•22h ago•465 comments

AMD GPU Debugger

https://thegeeko.me/blog/amd-gpu-debugging/
256•ibobev•20h ago•46 comments

Has the cost of building software dropped 90%?

https://martinalderson.com/posts/has-the-cost-of-software-just-dropped-90-percent/
304•martinald•17h ago•445 comments

Launch HN: Nia (YC S25) – Give better context to coding agents

https://www.trynia.ai/
118•jellyotsiro•19h ago•75 comments

Let's put Tailscale on a jailbroken Kindle

https://tailscale.com/blog/tailscale-jailbroken-kindle
292•Quizzical4230•19h ago•69 comments

Horses: AI progress is steady. Human equivalence is sudden

https://andyljones.com/posts/horses.html
438•pbui•11h ago•346 comments

A thousand-year-long composition turns 25 (2024)

https://longplayer.org/news/2024/12/31/a-thousand-year-long-composition-turns-25/
26•1659447091•5h ago•5 comments

Trials avoid high risk patients and underestimate drug harms

https://www.nber.org/papers/w34534
132•bikenaga•17h ago•40 comments

IBM to acquire Confluent

https://www.confluent.io/blog/ibm-to-acquire-confluent/
407•abd12•22h ago•325 comments

Paramount launches hostile bid for Warner Bros

https://www.cnbc.com/2025/12/08/paramount-skydance-hostile-bid-wbd-netflix.html
332•gniting•21h ago•344 comments

The Lost Machine Automats and Self-Service Cafeterias of NYC (2023)

https://www.untappedcities.com/automats-cafeterias-nyc/
79•walterbell•11h ago•24 comments

Hunting for North Korean Fiber Optic Cables

https://nkinternet.com/2025/12/08/hunting-for-north-korean-fiber-optic-cables/
260•Bezod•19h ago•96 comments

Periodic Spaces

https://ianthehenry.com/posts/periodic-spaces/
26•surprisetalk•5d ago•8 comments

Cassette tapes are making a comeback?

https://theconversation.com/cassette-tapes-are-making-a-comeback-yes-really-268108
106•devonnull•5d ago•178 comments

AI should only run as fast as we can catch up

https://higashi.blog/2025/12/07/ai-verification/
175•yuedongze•18h ago•152 comments

Show HN: Fanfa – Interactive and animated Mermaid diagrams

https://fanfa.dev/
119•bairess•4d ago•26 comments

Production tests: a guidebook for better systems and more sleep

https://martincapodici.com/2025/05/13/production-tests-a-guidebook-for-better-systems-and-more-sleep/
78•mcapodici•6mo ago

Comments

ashishb•6mo ago
Here's a general rule I follow along with this: "write tests along the axis of minimum change"[1]. Such tests are more valuable and require less maintenance over time.

1 - https://ashishb.net/programming/bad-and-good-ways-to-write-a...

compumike•6mo ago
I'd add that, tactically, production tests can be implemented in at least two different ways:

(1) You set up an outside service to send an HTTP request (or run a headless browser session) every minute; your endpoint runs some internal assertions that everything looks good and returns 200 on success.

(2) You set up a scheduled job internal to your service that runs every minute. This job runs some internal assertions that everything looks good, and sends a heartbeat to an outside service on success.

For #2: most apps of any complexity will already have some system for background and scheduled jobs, so #2 can make a lot of sense. It can also serve as a production assertion that your background job system (Sidekiq, Celery, Resque, crond, systemd, etc) is healthy and running! But it doesn't test the HTTP side of your stack at all.

For #1: it has the advantage that you also get to assert that all the layers between your user and your application are up and running: DNS, load balancers, SSL certificates, etc. But this means that on failure, it may be less immediately clear whether the failure is internal to your application, or somewhere else in the stack.
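
To make this concrete, a minimal sketch of both patterns (Python with Flask and requests assumed; the endpoint name, check helpers, and heartbeat URL below are illustrative placeholders, not anything from the article or our actual setup):

    import requests
    from flask import Flask

    app = Flask(__name__)

    def orders_written_recently() -> bool:
        # Placeholder: a real check would query the database.
        return True

    def queue_depth() -> int:
        # Placeholder: a real check would ask the job queue.
        return 0

    def run_internal_checks():
        """Cheap assertions that the system looks healthy."""
        assert orders_written_recently(), "no recent orders written"
        assert queue_depth() < 10_000, "job queue is backing up"

    # Pattern #1: an external monitor polls this endpoint every minute and alerts on non-200.
    @app.route("/healthz")
    def healthz():
        try:
            run_internal_checks()
        except AssertionError as exc:
            return str(exc), 500
        return "ok", 200

    # Pattern #2: a scheduled job (cron, Celery beat, Sidekiq-cron, ...) runs the same checks
    # internally and only pings an external dead-man's-switch URL when they pass.
    def scheduled_check_job():
        run_internal_checks()  # an exception here means no heartbeat, which raises the alert
        requests.get("https://monitor.invalid/heartbeat/abc123", timeout=5)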

My personal take has been to lean toward #2 more heavily (lots of individual check jobs that run once per minute inside Sidekiq and then check in on success), with a little bit of #1 sprinkled in as well (some lightweight health-check endpoints, others that do more intense checks on various parts of the system, and a few that monitor redirects like www->root domain or http->https). For our team we implement both with Heii On-Call (https://heiioncall.com/): for #2, sending heartbeats from the cron-style check jobs to the "Inbound Liveness" triggers, and for #1, implementing a bunch of "Outbound Probe" HTTP uptime checks with assertions on the response headers etc.

And this production monitoring is all in addition to a ton of rspec and capybara tests that run in CI before a build gets deployed. In terms of effort or lines of code, it's probably:

    90% rspec and capybara tests that run on CI (not production tests)
    9% various SystemXyzCheckJob tests that run every minute in production and send a heartbeat
    1% various health check endpoints with different assertions that are hit externally in production
And absolutely agree about requiring multiple consecutive failures before an alarm! Whenever I'm woken up by a false positive, my default timeout (i.e. # of consecutive failures required) gets a little bit higher :)
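
The consecutive-failures logic itself is tiny; a rough sketch (the threshold, check function, and paging hook are all illustrative placeholders):

    import time

    FAILURES_BEFORE_ALARM = 3  # require N consecutive failures before paging anyone

    def run_check() -> bool:
        # Placeholder: a real check would hit a health endpoint or run assertions.
        return True

    def page_on_call(message: str) -> None:
        # Placeholder: integrate with your paging/alerting service.
        print("ALERT:", message)

    def monitor_loop():
        consecutive_failures = 0
        while True:
            if run_check():
                consecutive_failures = 0
            else:
                consecutive_failures += 1
                if consecutive_failures >= FAILURES_BEFORE_ALARM:
                    page_on_call(f"check failed {consecutive_failures} times in a row")
            time.sleep(60)  # once per minute, as described above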

hugs•6mo ago
yeah, full end-to-end tests/monitors are like fire alarms: they can often tell you something is wrong, but not exactly what. but that doesn't mean fire alarms have no value. the most common failure mode for teams is having too many or none at all; having a few in key places is the way to go.

mhw•6mo ago
The fabulous Blazer gem includes a feature for #2: https://github.com/ankane/blazer?tab=readme-ov-file#checks - it's limited to checks that can be expressed as SQL queries, but that can get you quite a way.

aleksiy123•6mo ago
At Google we call these probers.

Does anyone know of any tools/SaaS that do this?

I was thinking it may be a good potential product, especially if it was super easy to generate/spin up for side projects.

hugs•6mo ago
"testing in production" can be controversial, but this is a well-balanced take on it.

lately i've been working on a decentralized production testing network called 'valet network' [1] (full disclosure: selenium creator here).

i suspect production tests are the killer app for this kind of network: test any site from anywhere on real, idle devices that more closely match real-world conditions. but as mentioned in the article, it's not that simple: dev users will still need to be smart about creating test data and filtering the tests out of system logs. i'm still in the "is this something people want?" learning phase, even though this is definitely something i want and wish i had when i was helping to fix healthcare.gov back in 2013/2014.

[1]: https://gist.github.com/hugs/7ba46b32d3a21945e08e78510224610...

vasusen•6mo ago
Thank you for the balanced take on an extremely spicy topic.

At WePay (YC S09) we debated this extensively and came up with a similar middle-of-the-road solution. Making sure that a credit card can be tokenized is the critical flow, so that check should run every minute. We ended up with about 4-5 very quick production tests. They helped with debugging as well as alerting.
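
Roughly what one of those per-minute tokenization checks can look like (purely illustrative: the endpoint, payload, and heartbeat URL below are placeholders, not WePay's actual API):

    import requests

    TEST_CARD = "4111111111111111"  # standard test card number, never a real card

    def check_card_tokenization():
        """Production test: confirm the critical tokenization flow works end to end."""
        resp = requests.post(
            "https://payments.example.invalid/v1/tokens",
            json={"card_number": TEST_CARD, "exp_month": 12, "exp_year": 2030, "cvv": "123"},
            timeout=10,
        )
        assert resp.status_code == 200, f"tokenization returned {resp.status_code}"
        assert resp.json().get("token"), "no token in response"

        # Report success to the monitoring service; silence here is what triggers the alert.
        requests.get("https://monitor.example.invalid/heartbeat/tokenization", timeout=5)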

I am now building a fully automated testing solution at Donobu (https://www.donobu.com), and production tests definitely come up as their own subcategory of e2e tests. I am going to use your guidelines to refine our prompt and bound our production test generator.

testthetest•6mo ago
> Running a test every minute, or 1440 times a day, will show up quite a lot in logs, metrics, and traces.

...not to mention that automated tests are by definition bot traffic, and websites do (and should) have protections against spam. Cloudflare or AWS WAF tends to filter out some of our AWS DeviceFarm tests, and running automated tests directly from EC2 instances is pretty much guaranteed to be caught by a CAPTCHA. Which is not a complaint: this is literally what those protections were designed to do.

A way to mitigate this issue is to implement "test-only" user agents or tokens to make sure that synthetic requests are distinguishable from real ones, but that means that our code does something in testing that it doesn't do in "real life". (The full Volkswagen effect.)
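
A rough sketch of that tagging approach (the header name, token, and WSGI-style middleware are invented for illustration, not any specific product's API):

    import requests

    SYNTHETIC_HEADER = "X-Synthetic-Check"       # hypothetical header name
    SYNTHETIC_TOKEN = "monitoring-shared-secret" # hypothetical shared secret

    # Monitoring side: mark the request so WAF rules and log pipelines can recognize it.
    def run_synthetic_check(url: str) -> bool:
        resp = requests.get(url, headers={SYNTHETIC_HEADER: SYNTHETIC_TOKEN}, timeout=10)
        return resp.status_code == 200

    # Application side: tag matching requests so they can be excluded from metrics and logs,
    # while still exercising the same code paths as real traffic.
    def synthetic_tagging_middleware(app):
        def middleware(environ, start_response):
            header_value = environ.get("HTTP_X_SYNTHETIC_CHECK")
            environ["is_synthetic"] = (header_value == SYNTHETIC_TOKEN)
            return app(environ, start_response)
        return middleware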

burnt-resistor•6mo ago
Also known as deep monitoring: checking that functionality is available and working correctly.