
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
289•theblazehen•2d ago•95 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
20•alainrk•1h ago•11 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
34•AlexeyBrin•1h ago•5 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
15•onurkanbkrc•1h ago•1 comment

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
717•klaussilveira•16h ago•218 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
978•xnx•21h ago•562 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
94•jesperordrup•6h ago•35 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
4•nar001•34m ago•2 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
138•matheusalmeida•2d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
74•videotopia•4d ago•11 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
16•matt_d•3d ago•4 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
46•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
242•isitcontent•16h ago•27 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
242•dmpetrov•16h ago•128 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
4•andmarios•4d ago•1 comment

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
344•vecti•18h ago•153 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
510•todsacerdoti•1d ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
393•ostacke•22h ago•101 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
309•eljojo•19h ago•192 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•187 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
437•lstoll•22h ago•286 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
32•1vuio0pswjnm7•2h ago•31 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
73•kmm•5d ago•11 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
26•bikenaga•3d ago•13 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
98•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
278•i5heu•19h ago•227 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
43•gmays•11h ago•14 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1088•cdrnsf•1d ago•469 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
312•surprisetalk•3d ago•45 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
36•romes•4d ago•3 comments

Super-resolution of Sentinel-2 images (10 m → 5 m)

https://github.com/Topping1/L1BSR-GUI
30•mixtape2025-1•6mo ago

Comments

DoctorOetker•6mo ago
pff, making up details 2× in both directions... they could at least have done real synthetic-aperture calculations...
RF_Savage•6mo ago
Yeah...
curiousObject•6mo ago
The image sensor samples different light wavelengths with a time offset of about 250ms, as the satellite moves over the Earth.

I think that means it could be possible to enhance the resolution by using luminance data from one wavelength to make an ‘educated guess’ at the luminance of other wavelengths. It would be a more advanced version of the kind of interpolation that standard image-sensor cameras do with a Bayer color filter.

So it seems possible to get some extra information out of the system, with a good likelihood of success, but some risk of point hallucinations.
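
In rough NumPy terms, the idea would look something like this toy sketch (hypothetical band arrays and a classic high-pass detail-injection scheme, not necessarily what the OP’s tool does):

    # Toy sketch: inject high-frequency detail from a 10 m band into an
    # upsampled 20 m band (classic HPF detail injection; hypothetical data).
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def detail_inject(lowres_band, highres_band, strength=1.0):
        factor = highres_band.shape[0] / lowres_band.shape[0]
        up = zoom(lowres_band, factor, order=3)  # bicubic upsample to the fine grid
        # High-pass of the sharp band = the detail the coarse band is missing.
        detail = highres_band - gaussian_filter(highres_band, sigma=factor)
        return up + strength * detail            # the 'educated guess'

    rng = np.random.default_rng(0)
    b04_10m = rng.random((200, 200))   # hypothetical 10 m band
    b11_20m = rng.random((100, 100))   # hypothetical 20 m band
    print(detail_inject(b11_20m, b04_10m).shape)  # (200, 200)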

The image sensor and filters are quite complex, much more complicated than a simple Bayer-filter CCD/CMOS sensor. AFAIK it is not a moving filter but a fixed one; the satellite, however, is obviously moving.

I don’t know if the ‘Super-Resolution’ technique in the OP is taking advantage of that possibility, though. I agree it would be disappointing if it’s just guessing, although perhaps a carefully trained ML system would still figure out how to use the available data as I’ve suggested.

From the Sentinel-2 mission page:

“The optical Multi-Spectral Instrument (MSI) samples 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60 m spatial resolution.”

“Due to the particular geometrical layout of the focal plane, each spectral band of the MSI observes the ground surface at different times.”

https://sentiwiki.copernicus.eu/web/s2-mission

I’m making some guesses, because I don’t understand most of the optics and camera design that the ESA page describes. For instance, can anyone explain why there’s a big ~250 ms offset between measuring different light wavelengths, even though the optics and filters are fixed immobile relative to each other? Thank you.

The time per orbit is about 100 minutes, in a sun-synchronous orbit.

Actually there are 3 satellites: the constellation is supposed to be 2, but there’s currently a spare as well. The orbits are very widely separated, though, supposedly on opposite sides of the planet, so I don’t know how much enhancement there could be from combining the images from all the satellites, or whether the OP’s method even tries that.

Anyway, the folks at ESA working with Sentinel-2/Copernicus must have already thought very hard about anything they can do to enhance these images, surely?

Edit: the L1BSR project linked from the OP’s git page does include ‘exploiting sensor overlap’! So I assume it really is doing something similar to what I’ve suggested.

RicoElectrico•6mo ago
Sentinel-2 images are not exactly lined up across different revisits of the same spot; there are minute yet perceptible subpixel offsets. If there is sufficient aliasing in the system, it should be theoretically possible to extract extra information from multiple visits. However, the linked repo doesn't appear to do that.
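
In principle that's classic multi-frame super-resolution. A minimal shift-and-add sketch, assuming the subpixel offsets are already known (in practice you'd estimate them, e.g. by phase correlation):

    # Shift-and-add sketch: accumulate subpixel-offset revisits onto a
    # finer grid. Offsets are assumed known here; estimate them in practice.
    import numpy as np

    def shift_and_add(frames, offsets, scale=2):
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        ys, xs = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, offsets):
            # Nearest high-res cell for each low-res sample of this visit.
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hy, hx), frame)
            np.add.at(cnt, (hy, hx), 1)
        return acc / np.maximum(cnt, 1)  # unfilled cells stay 0; inpaint in practice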
Brajeshwar•6mo ago
This topic is interesting to us, as we (especially my co-founder) have done a lot of research in this area, both adapting existing super-resolution methods for satellite images and inventing new ones. We happened to discuss this in detail just yesterday.

Btw, we have a demo of the result of the enhancement we achieved about 2 years ago at https://demo.valinor.earth

Looking at this implementation, the noise artifacts likely stem from a few sources. The RCAN model normalizes input by dividing by 400, which is a fixed assumption about Sentinel-2’s radiometric range that doesn’t account for atmospheric variability or scene-specific characteristics. Plus, working with L1B data means you’re enhancing atmospheric artifacts along with ground features - those hazy patterns aren’t just sensor noise but actual atmospheric scattering that gets amplified during super-resolution.
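
For example, a scene-adaptive normalization would sidestep that fixed radiometric assumption; a rough sketch of one alternative (not the repo’s actual code):

    # Sketch: per-scene percentile normalization instead of a fixed /400.
    # Hypothetical alternative, not the linked repo's implementation.
    import numpy as np

    def normalize_fixed(band):
        return band / 400.0  # assumes one radiometric range for every scene

    def normalize_robust(band, lo_pct=1.0, hi_pct=99.0):
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        return np.clip((band - lo) / max(hi - lo, 1e-6), 0.0, 1.0)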

Over the past 2 years, we’ve hit several walls that might sound familiar:

- Models trained on clean datasets (DIV2K, etc.) completely fall apart on real satellite imagery with clouds, shadows, and atmospheric effects.

- The classic CNN architectures like RCAN struggle with global context - they’ll sharpen a building edge but miss that it’s part of a larger urban pattern.

- Training on one sensor and deploying on another doesn’t work without significant degradation.

Some fixes we’ve found effective:

- Incorporate atmospheric correction directly into the SR pipeline (check out the MuS2 benchmark paper from 2023).

- Use physics-informed neural networks that understand radiative transfer.

- Multi-temporal stacking before SR dramatically reduces noise while preserving real features (minimal sketch below).
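
A minimal sketch of that last point, assuming the revisits are already co-registered:

    # Per-pixel temporal median over co-registered revisits: transient
    # clouds and sensor noise get suppressed, stable features survive.
    import numpy as np

    def temporal_median_stack(frames):
        # frames: (T, H, W) aligned acquisitions of the same scene
        return np.median(np.asarray(frames, dtype=np.float64), axis=0)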

For anyone diving deep into this space, check out:

- SRRepViT (2024) - achieves similar quality to heavyweight models with only 0.25M parameters.

- DiffusionSat - the new foundation model that conditions on geolocation metadata.

- The L1BSR approach from CVPR 2023 that exploits Sentinel-2’s detector overlap for self-supervised training.

- FocalSR (2025) with Fourier-transform attention - game changer for preserving spectral signatures.

Also worth exploring is the WorldStrat dataset for training, and if you’re feeling adventurous, the new SGDM models claiming a 32x enhancement (though take that with a grain of salt for operational use).

The real breakthrough will likely come from models that jointly optimize for visual quality AND radiometric accuracy. Current models excel at one or the other, but rarely both.
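
To make that concrete: a hypothetical joint objective (a sketch, not any published loss) might weight a pixel-fidelity term against a radiometric term that penalizes band-statistic drift, e.g.:

    # Hypothetical joint loss: visual fidelity plus radiometric consistency
    # (downsampled SR output should match the input's per-band means).
    import torch
    import torch.nn.functional as F

    def joint_loss(sr, hr, lr, scale=2, w_rad=0.1):
        # sr, hr: (B, C, H, W); lr: (B, C, H/scale, W/scale)
        visual = F.l1_loss(sr, hr)
        sr_down = F.avg_pool2d(sr, kernel_size=scale)
        radiometric = F.l1_loss(sr_down.mean(dim=(2, 3)), lr.mean(dim=(2, 3)))
        return visual + w_rad * radiometric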

If you’re interested in these topics, we’d love to connect. We’re at brajeshwar@valinor.earth and amir@valinor.earth.