frontpage.
We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
63•ColinWright•57m ago•27 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
18•surprisetalk•1h ago•15 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
96•alephnerd•1h ago•43 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
120•AlexeyBrin•7h ago•22 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
822•klaussilveira•21h ago•248 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
55•vinhnx•4h ago•7 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
53•thelok•3h ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
102•1vuio0pswjnm7•8h ago•117 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1057•xnx•1d ago•608 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
75•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
476•theblazehen•2d ago•175 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
202•jesperordrup•11h ago•69 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
545•nar001•5h ago•252 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
213•alainrk•6h ago•331 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
34•rbanffy•4d ago•7 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
27•marklit•5d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
113•videotopia•4d ago•30 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
73•speckx•4d ago•74 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
68•mellosouls•4h ago•73 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•21h ago•37 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
285•dmpetrov•22h ago•153 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
555•todsacerdoti•1d ago•268 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
42•matt_d•4d ago•18 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
472•lstoll•1d ago•312 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•215 comments

Launch HN: Constellation Space (YC W26) – AI for satellite mission assurance

48•kmajid•2w ago
Hi HN! We're Kamran, Raaid, Laith, and Omeed from Constellation Space (https://constellation-io.com/). We built an AI system that predicts satellite link failures before they happen. Here's a video walkthrough: https://www.youtube.com/watch?v=069V9fADAtM.

Between us, we've spent years working on satellite operations at SpaceX, Blue Origin, and NASA. At SpaceX, we managed constellation health for Starlink. At Blue, we worked on next-gen test infra for New Glenn. At NASA, we dealt with deep space communications. The same problem kept coming up: by the time you notice a link is degrading, you've often already lost data.

The core issue is that satellite RF links are affected by dozens of interacting variables. A satellite passes overhead, and you need to predict whether the link will hold for the next few minutes. That depends on: the orbital geometry (elevation angle changes constantly), tropospheric attenuation (humidity affects signal loss via ITU-R P.676), rain fade (calculated via ITU-R P.618; rain rates in mm/hr translate directly to dB of loss at Ka-band and above), ionospheric scintillation (we track the Kp index from magnetometer networks), and network congestion on top of all that.
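For intuition on the rain-fade piece: ITU-R P.838 models specific attenuation as gamma = k * R^alpha in dB/km, which is then multiplied by an effective slant-path length. A minimal sketch, where k and alpha are illustrative ballpark values for Ka-band rather than the actual ITU coefficients (which vary with frequency and polarization):

```python
def rain_specific_attenuation(rain_rate_mm_hr, k=0.075, alpha=1.10):
    """Specific attenuation in dB/km per ITU-R P.838: gamma = k * R^alpha.
    k and alpha here are illustrative ballpark values, not the ITU tables."""
    return k * rain_rate_mm_hr ** alpha

def rain_fade_db(rain_rate_mm_hr, effective_path_km):
    """Total rain attenuation over the effective slant path, in dB."""
    return rain_specific_attenuation(rain_rate_mm_hr) * effective_path_km

# A 25 mm/hr downpour over a 4 km effective path costs roughly 10 dB
# with these toy coefficients -- enough to break a thin Ka-band margin.
fade = rain_fade_db(25.0, 4.0)
```

The nonlinearity (alpha > 1) is why moderate rain is survivable but heavy rain collapses the link quickly.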

The traditional approach is reactive. Operators watch dashboards, and when SNR drops below a threshold, they manually reroute traffic or switch to a backup link. With 10,000 satellites in orbit today and 70,000+ projected by 2030, this doesn't scale. Our system ingests telemetry at around 100,000 messages per second from satellites, ground stations, weather radar, IoT humidity sensors, and space weather monitors. We run physics-based models in real time (the full link budget equations, ITU atmospheric standards, orbital propagation) to compute what should be happening. Then we layer ML models on top, trained on billions of data points from actual multi-orbit operations.
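The physics baseline described here boils down to a link budget. A hedged sketch of the core arithmetic: free-space path loss plus a C/N0 margin check. All parameter values below are invented for illustration, not any operator's real numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c), in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def link_margin_db(eirp_dbw, rx_gt_dbk, distance_m, freq_hz,
                   atmos_loss_db, required_cn0_dbhz):
    """Link margin = achieved C/N0 minus required C/N0.
    C/N0 = EIRP + G/T - FSPL - atmospheric losses - k,
    with Boltzmann's constant k = -228.6 dBW/K/Hz."""
    cn0 = (eirp_dbw + rx_gt_dbk - fspl_db(distance_m, freq_hz)
           - atmos_loss_db + 228.6)
    return cn0 - required_cn0_dbhz

# Illustrative LEO Ka-band downlink: 1000 km slant range at 20 GHz.
margin = link_margin_db(eirp_dbw=45.0, rx_gt_dbk=15.0,
                        distance_m=1_000_000.0, freq_hz=20e9,
                        atmos_loss_db=3.0, required_cn0_dbhz=75.0)
```

Comparing this "should be" margin against observed SNR is what lets the system distinguish expected geometry-driven loss from an anomaly worth predicting on.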

The ML piece is where it gets interesting. We use federated learning because constellation operators (understandably) don't want to share raw telemetry. Each constellation trains local models on their own data, and we aggregate only the high-level patterns. This gives us transfer learning across different orbit types and frequency bands - learnings from LEO Ka-band links help optimize MEO or GEO operations. We can predict most link failures 3-5 minutes out with >90% accuracy, which gives enough time to reroute traffic before data loss. The system is fully containerized (Docker/Kubernetes) and deploys on-premise for air-gapped environments, on GovCloud (AWS GovCloud, Azure Government), or standard commercial clouds.
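The aggregation step described above is in the spirit of FedAvg: each operator trains locally, and only the model weights are combined, weighted by how much data each contributed. A toy sketch, with plain Python lists standing in for model parameters (not their actual pipeline):

```python
def fed_avg(local_weights, sample_counts):
    """FedAvg-style aggregation: weighted average of per-operator weights.
    local_weights: one weight vector per constellation operator.
    sample_counts: number of training samples behind each vector."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    global_weights = [0.0] * dim
    for weights, n in zip(local_weights, sample_counts):
        for i, w in enumerate(weights):
            global_weights[i] += w * (n / total)
    return global_weights

# Two operators; the second has 3x the telemetry, so it dominates.
merged = fed_avg([[1.0, 2.0], [5.0, 6.0]], [100, 300])
```

Raw telemetry never leaves either operator; only the weight vectors cross the trust boundary, which is the property the post is relying on.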

Right now we're testing with defense and commercial partners. The dashboard shows real-time link health, forecasts at 60/180/300 seconds out, and root cause analysis (is this rain fade? satellite setting below horizon? congestion?). We expose everything via API - telemetry ingestion, predictions, topology snapshots, even an LLM chat endpoint for natural language troubleshooting.
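To make the forecast API concrete, here is a hypothetical response shape and a consumer-side reroute check. Every field and endpoint name here is invented for illustration; the actual Constellation Space API is not public:

```python
import json

# Hypothetical payload a link-health forecast endpoint might return,
# covering the 60/180/300 second horizons mentioned above.
sample = json.loads("""
{
  "link_id": "sat-1423/gs-denver",
  "forecasts": [
    {"horizon_s": 60,  "p_failure": 0.02, "cause": null},
    {"horizon_s": 180, "p_failure": 0.31, "cause": "rain_fade"},
    {"horizon_s": 300, "p_failure": 0.78, "cause": "rain_fade"}
  ]
}
""")

def needs_reroute(forecast, threshold=0.5):
    """Return the horizons (in seconds) whose predicted failure
    probability crosses the reroute threshold."""
    return [f["horizon_s"] for f in forecast["forecasts"]
            if f["p_failure"] >= threshold]

horizons = needs_reroute(sample)  # only the 300 s horizon crosses 0.5
```

A NOC consumer would act on the earliest horizon that crosses its threshold, trading false-positive reroutes against data loss.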

The hard parts we're still working on: prediction accuracy degrades for longer time horizons (beyond 5 minutes gets dicey), we need more labeled failure data for rare edge cases, and the federated learning setup requires careful orchestration across different operators' security boundaries. We'd love feedback from anyone who's worked on satellite ops, RF link modeling, or time-series prediction at scale. What are we missing? What would make this actually useful in a production NOC environment?

Happy to answer any technical questions!

Comments

1yvino•2w ago
pretty intriguing demo video. how do you ensure your telemetry ingestion keeps working operationally? that will be a daunting task. your output will only be as good as your telemetry; any delay or break in the data and everything is bound to break.
kmajid•2w ago
Great point, telemetry reliability is the biggest hurdle for any mission-critical system. We address the "garbage in, garbage out" risk by prioritizing freshness (our pipeline treats latency as a failure).

We use a "leaky" buffer strategy (if data is too old to be actionable for a 3-minute forecast, we drop it to ensure the models aren't lagging behind the physical reality of the link), graceful degradation (when telemetry is delayed or broken, the system automatically falls back to physics-only models, i.e. orbital propagation and ITU standards), and edge validation (we validate and normalize data at the ingestion point; if a stream becomes corrupted or "noisy," the system flags that specific sensor as unreliable and adjusts the prediction confidence scores in real time).
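The "leaky" buffer idea, dropping anything too stale to inform a short-horizon forecast, can be sketched as follows (illustrative only, not their implementation):

```python
import collections
import time

class LeakyTelemetryBuffer:
    """Keeps only messages fresh enough to feed a short-horizon forecast.
    Anything older than max_age_s is dropped rather than processed late."""

    def __init__(self, max_age_s=30.0):
        self.max_age_s = max_age_s
        self._queue = collections.deque()

    def push(self, msg, ts=None):
        """Enqueue a message with its arrival timestamp."""
        self._queue.append((time.monotonic() if ts is None else ts, msg))

    def drain(self, now=None):
        """Return fresh messages; silently drop stale ones."""
        now = time.monotonic() if now is None else now
        fresh = []
        while self._queue:
            ts, msg = self._queue.popleft()
            if now - ts <= self.max_age_s:
                fresh.append(msg)
        return fresh

buf = LeakyTelemetryBuffer(max_age_s=30.0)
buf.push("snr=12.1", ts=0.0)    # stale by the time we drain
buf.push("snr=11.7", ts=95.0)   # still fresh
fresh = buf.drain(now=100.0)    # the stale reading is dropped
```

The key design choice is treating latency itself as a failure: a 90-second-old SNR reading is worse than no reading for a 3-minute forecast.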

storystarling•2w ago
I'm wondering how the physics models handle the state discontinuity if you're dropping intermediate telemetry. Typically those propagators rely on continuous integration steps, so if the buffer leaks data to catch up, I'd expect significant drift unless you're constantly re-seeding the state vector. How do you manage the handoff between the dropped data and the physics fallback without a jump in the prediction?
kmajid•1w ago
We prevent discontinuities by using a Continuous Extended Kalman Filter where the physics model serves as the persistent baseline and telemetry acts only as a corrective update. When the buffer leaks, the system doesn't snap to a new position; instead, it continues propagating the state via physics while the uncertainty covariance grows smoothly. When fresh data eventually arrives, we use the innovation delta to gradually steer the state back to reality, ensuring a seamless transition rather than a coordinate jump.
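A toy 1-D version of that filter behavior: the physics model keeps propagating the state during a telemetry gap while the covariance grows, and a late measurement steers the estimate back through the innovation term. This is a scalar stand-in for a full EKF, not their code:

```python
class ScalarKalman:
    """Toy 1-D Kalman filter: predict with a physics model, correct with
    telemetry when it arrives. Without updates, uncertainty (P) grows."""

    def __init__(self, x0, p0, process_var, meas_var):
        self.x, self.p = x0, p0
        self.q, self.r = process_var, meas_var

    def predict(self, physics_delta):
        """Propagate the state via the physics model; inflate covariance."""
        self.x += physics_delta
        self.p += self.q

    def update(self, z):
        """Blend in a measurement; the gain depends on relative uncertainty."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # innovation steers the state back
        self.p *= (1 - k)

kf = ScalarKalman(x0=10.0, p0=1.0, process_var=0.5, meas_var=1.0)
for _ in range(5):           # telemetry gap: physics-only propagation
    kf.predict(physics_delta=-0.2)
p_before = kf.p              # covariance has grown during the gap
kf.update(z=8.5)             # fresh telemetry pulls the estimate back
```

Because the gain scales with accumulated uncertainty, the estimate moves most of the way toward the late measurement without ever snapping, which is the "no coordinate jump" property described above.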
free_energy_min•2w ago
Very cool company! Are y’all hiring?
kmajid•2w ago
Not right now but we will be soon! Send over your resume to hello@constellation-io.com if you're interested in joining.
JumpCrisscross•2w ago
Are you raising?
kmajid•2w ago
Not currently; we're planning to open our seed round in 4 weeks. Feel free to shoot us a note at hello@constellation-io.com if you're interested in learning more.
JumpCrisscross•2w ago
Done (XX:56).
infinitewars•2w ago
Do you plan to work on orbital weapon systems like Golden Dome?
kmajid•2w ago
We're big believers in American Dynamism.
dylan604•2w ago
you could have used a one word answer, yes. the extra words could have been "if we can get it".

in other words, you're not opposed to working in the military industrial complex. your reply walks the line of weasel words, trying not to offend those against while nodding to those that approve. you'll do fine as a spokesperson

kmajid•2w ago
You get it!
infinitewars•2w ago
American Dynamism is a term the investors of Casteleon made up. That's the company, founded by SpaceX executives, that's mass producing hypersonic weapons to put into orbit.
verzali•2w ago
Rather than longer times, what about short times? I did some work on fast fading and you can see rapid swings in fade over <5s. That is hard for automated systems to respond to, so you normally respond by increasing the link margin. If you can predict this you could reduce the margin needed. That could potentially be very valuable.
kmajid•2w ago
Spot on. We categorize that <5s window as tactical fade mitigation.

Our current 3-5 minute window is for topology/routing, but the sub-5s window is for Dynamic Link Margin (DLM). If we can predict fast-fading signatures (like tropospheric scintillation or edge-of-cloud diffraction), we can move from reactive to proactive ACM (adaptive coding and modulation).
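One way predicted fade could feed proactive ACM is to pick the most spectrally efficient MODCOD whose SNR requirement still clears the forecasted fade. The table below uses illustrative placeholder thresholds, not real DVB-S2 figures:

```python
# Illustrative MODCOD table: (name, required SNR in dB, spectral efficiency).
# Thresholds are made up for the sketch, not actual DVB-S2 values.
MODCODS = [
    ("QPSK 1/2",    1.0, 1.0),
    ("QPSK 3/4",    4.0, 1.5),
    ("8PSK 3/4",    7.9, 2.2),
    ("16APSK 3/4", 10.2, 3.0),
]

def pick_modcod(current_snr_db, predicted_fade_db, residual_margin_db=0.5):
    """Choose the most efficient MODCOD that still closes the link
    after the predicted fade, keeping only a small residual margin."""
    usable_snr = current_snr_db - predicted_fade_db - residual_margin_db
    best = None
    for name, req_snr, eff in MODCODS:
        if req_snr <= usable_snr and (best is None or eff > best[2]):
            best = (name, req_snr, eff)
    return best[0] if best else "link down: reroute"

# 12 dB of SNR with 3.5 dB of predicted fade leaves room for 8PSK 3/4.
choice = pick_modcod(current_snr_db=12.0, predicted_fade_db=3.5)
```

This is exactly the margin-reduction argument in the parent comment: with a trustworthy sub-5s fade forecast, the static residual margin can shrink and the average MODCOD can stay denser.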

dsrtslnd23•2w ago
Is the inference running on-orbit or ground-side? I guess SWaP is a major constraint for the former. Not sure if you are using FPGAs or something like a Jetson?
kmajid•2w ago
Primary inference runs ground-side (K8s/GovCloud) to aggregate global data for routing. We do see the need for something like a hybrid-edge approach for tactical, sub-5s mitigation. We would target FPGAs (like Xilinx Versal) for production flight hardware to meet strict SWaP and radiation-hardening requirements.