

Architecture + cost drivers for a deterministic rule/metric engine (1,200 metrics)

2•Trackdiver•3w ago
I’m designing a large-scale deterministic analytics engine and would appreciate architectural + cost/effort advice from people who’ve built similar systems.

The core challenge:

• ~1,200 domain-specific metrics
• All rule-based (no ML), fully deterministic
• Metrics share common primitives but differ in configuration
• Metrics combine into composite indices
• Outputs must be auditable and reproducible (same inputs → same outputs)
• I want metrics definable declaratively (not hard-coded one by one)

The system ingests structured event data, computes per-entity metrics, and produces ranked outputs with full breakdowns.

I’m specifically looking for guidance on:

• Architectures for large configurable rule/metric engines
• How to represent metric definitions (DSL vs JSON/YAML vs expression trees)
• Managing performance without sacrificing transparency
• Avoiding “1,200 custom functions” antipatterns
• What you’d do differently if starting this today
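To make the “declarative, not hard-coded one by one” shape concrete, here is a purely hypothetical Python sketch: metric definitions are plain data over a small shared primitive vocabulary, compiled once into callables. All names (metric ids, primitives, fields) are invented for illustration.

```python
# Hypothetical sketch: metrics as data over shared primitives.
# Every identifier below is illustrative, not from a real system.

PRIMITIVES = {
    "count": lambda events, field=None: len(events),
    "sum":   lambda events, field=None: sum(e[field] for e in events),
    "mean":  lambda events, field=None: (
        sum(e[field] for e in events) / len(events) if events else 0.0
    ),
}

# Declarative definitions: shared primitive + per-metric configuration.
METRIC_DEFS = [
    {"id": "order_count", "primitive": "count"},
    {"id": "total_spend", "primitive": "sum",  "field": "amount"},
    {"id": "avg_spend",   "primitive": "mean", "field": "amount"},
]

def compile_metric(defn):
    """Turn one declarative definition into a pure function of events."""
    fn = PRIMITIVES[defn["primitive"]]
    field = defn.get("field")
    return lambda events: fn(events, field)

def compute_all(events):
    """Deterministic: same events in, same metric values out."""
    return {d["id"]: compile_metric(d)(events) for d in METRIC_DEFS}

events = [{"amount": 10.0}, {"amount": 30.0}]
print(compute_all(events))
# {'order_count': 2, 'total_spend': 40.0, 'avg_spend': 20.0}
```

The point of the sketch is that adding metric 1,201 means adding a dict, not a function, so auditability reduces to diffing data.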

Cost / effort sanity check (important): if you were scoping this as a solo engineer or small team, what are the biggest cost drivers and realistic milestones?

• What should “Phase 1” include to validate the engine (e.g., primitives + declarative metric format + compute pipeline + 100–200 metrics)?
• What’s a realistic engineering effort range for Phase 1 vs “all 1,200” (weeks/months, 1–2 devs vs 3–5 devs)?
• Any common traps that make cost explode (data modeling mistakes, premature UI, overengineering the DSL, etc.)?

I’m not looking to hire here — just trying to sanity-check design decisions and expected effort before implementation.

Thanks in advance for any insight.

Comments

crosslayer•3w ago
The pattern I’ve seen bite systems like this first isn’t compute or storage: it’s semantic drift in metric definitions over time.

When you have ~1,200 deterministic metrics sharing primitives, the real cost driver becomes definition coupling, not execution. If metrics are “configurable” but allowed to encode control flow, branching semantics, or hidden normalization rules, you end up with 1,200 soft-coded functions anyway, just harder to reason about.

One approach that’s worked well for me is to explicitly separate:

• Primitive signals (pure, immutable, versioned)
• Metric transforms (strictly functional, no side effects, no cross-metric reads)
• Aggregation/composition layers (where ranking and composite indices live)
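Roughly, in Python (all signal and metric names here are made up for illustration):

```python
# Sketch of the three-layer split; every name is illustrative.
from typing import Callable, Mapping

# 1. Primitive signals: pure, recorded inputs (versioned elsewhere).
Signals = Mapping[str, float]

# 2. Metric transforms: strictly functional, read only primitive signals,
#    never other metrics.
MetricFn = Callable[[Signals], float]

METRICS: dict[str, MetricFn] = {
    "error_rate":  lambda s: s["errors"] / s["requests"] if s["requests"] else 0.0,
    "p95_latency": lambda s: s["latency_p95_ms"],
}

# 3. Composition layer: combines metric outputs into indices.
#    It sees metric values only, never raw signals.
def reliability_index(metric_values: Mapping[str, float]) -> float:
    # Toy weighting; in a real system the weights would be versioned config.
    return 1.0 - min(1.0, metric_values["error_rate"] * 10)

def evaluate(signals: Signals) -> dict[str, float]:
    values = {name: fn(signals) for name, fn in METRICS.items()}
    values["reliability_index"] = reliability_index(values)
    return values
```

The enforcement that matters is the type boundary: transforms take `Signals`, the composition layer takes metric values, and nothing can reach across.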

The key constraint: metric definitions must be referentially transparent and evaluable in isolation. If a metric can’t be recomputed offline from recorded inputs and its definition hash, it’s already too powerful.
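A minimal sketch of what I mean by a definition hash (field names invented for illustration): canonicalize the declarative form, hash it, and stamp every stored result with that hash, so replay means “fetch the recorded inputs, fetch the definition with this hash, recompute.”

```python
# Hedged sketch: hashing a declarative metric definition so results
# are traceable to the exact definition that produced them.
import hashlib
import json

def definition_hash(defn: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so semantically
    # identical definitions always hash the same.
    canonical = json.dumps(defn, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

defn = {"id": "avg_spend", "primitive": "mean", "field": "amount", "version": 3}

# A stored result names its inputs and its definition, never "latest".
record = {
    "metric": defn["id"],
    "definition_hash": definition_hash(defn),
    "inputs_ref": "events/2026-01-07.parquet",  # recorded inputs, not live data
    "value": 20.0,
}
```

Anything that can’t be reproduced from `inputs_ref` plus `definition_hash` is a bug by construction.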

On representation, I’ve had better outcomes with a constrained expression tree (or typed DSL) than with raw JSON/YAML. The goal isn’t flexibility; it’s preventing the system from becoming a general-purpose programming environment.
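A constrained expression tree in this spirit can stay very small: a closed set of ops, no loops, no side effects, so every definition is statically inspectable. A hypothetical sketch (op names invented):

```python
# Sketch of a closed-vocabulary expression tree evaluator.
# The op set is deliberately tiny; anything unknown is rejected.
def eval_node(node, signals):
    op = node["op"]
    if op == "signal":                    # leaf: read a primitive signal
        return signals[node["name"]]
    if op == "const":                     # leaf: literal value
        return node["value"]
    args = [eval_node(a, signals) for a in node["args"]]
    if op == "add":
        return sum(args)
    if op == "mul":
        a, b = args
        return a * b
    if op == "div":
        a, b = args
        return a / b if b else 0.0
    raise ValueError(f"unknown op: {op}")  # closed set: no escape hatch

# A metric as data rather than code: error_rate = errors / requests
ERROR_RATE = {"op": "div", "args": [
    {"op": "signal", "name": "errors"},
    {"op": "signal", "name": "requests"},
]}
```

Because definitions are just trees, validation, cost bounding, dependency extraction, and hashing all become tree walks rather than code analysis.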

For Phase 1, I’d strongly cap scope at:

• A small, fixed primitive vocabulary
• 100–200 metrics max
• Explicit versioning + replay tooling
• Hard limits on metric execution cost

The biggest cost explosions I’ve seen come from:

• Allowing metrics to depend on other metrics implicitly
• Letting “configuration” evolve without versioned invariants
• Optimizing performance before semantic boundaries are locked

Curious whether you’re thinking about definition immutability and replayability as first-class constraints, or treating them as implementation details to address later.