frontpage.

Tiny C Compiler

https://bellard.org/tcc/
51•guerrilla•1h ago•20 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
36•mltvc•1h ago•31 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
148•valyala•5h ago•25 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
76•zdw•3d ago•31 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
36•gnufx•4h ago•39 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
82•surprisetalk•5h ago•89 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
19•swah•4d ago•12 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
118•mellosouls•8h ago•231 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
156•AlexeyBrin•11h ago•28 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
864•klaussilveira•1d ago•264 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
17•martialg•49m ago•3 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
113•vinhnx•8h ago•14 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
28•randycupertino•57m ago•29 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
21•mbitsnbites•3d ago•1 comment

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
73•thelok•7h ago•13 comments

First Proof

https://arxiv.org/abs/2602.05192
74•samasblack•7h ago•57 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
253•jesperordrup•15h ago•82 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
156•valyala•5h ago•135 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
532•theblazehen•3d ago•197 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
67•vedantnair•1h ago•53 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
38•momciloo•5h ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
98•onurkanbkrc•10h ago•5 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
19•languid-photic•3d ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
212•1vuio0pswjnm7•12h ago•320 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
42•marklit•5d ago•6 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
52•rbanffy•4d ago•14 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
273•alainrk•10h ago•452 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
129•videotopia•4d ago•40 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
648•nar001•9h ago•284 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
41•sandGorgon•2d ago•17 comments

Super-resolution of Sentinel-2 images (10 m → 5 m)

https://github.com/Topping1/L1BSR-GUI
30•mixtape2025-1•6mo ago

Comments

DoctorOetker•6mo ago
Pff, making up details 2x in both directions... could at least have done real synthetic aperture calculations...
RF_Savage•6mo ago
Yeah...
curiousObject•6mo ago
The image sensor samples different light wavelengths with a time offset of about 250 ms as the satellite moves over the Earth.

I think that means it could be possible to enhance the resolution by using luminance data from one wavelength to make an ‘educated guess’ at the luminance of other wavelengths. It would be a more advanced version of the kind of interpolation that standard image-sensor cameras do with a Bayer color filter.

So it seems possible to get some extra information out of the system, with a good likelihood of success, but some risk of hallucinated points.

The image sensor and filters are quite complex, much more complicated than a simple Bayer-filter CCD/CMOS sensor. It is not, AFAIK, a moving filter but a fixed one; the satellite, however, is obviously moving.

I don’t know if the ‘Super-Resolution’ technique in the OP is taking advantage of that possibility, though. I agree it would be disappointing if it’s just guessing, although perhaps a carefully trained ML system would still figure out how to use the available data as I’ve suggested.
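As a minimal sketch of the cross-band ‘educated guess’ described above (this is essentially ratio-based pan-sharpening, not necessarily what the OP’s tool does; the function name and the co-registration assumption are mine):

    import numpy as np
    from scipy.ndimage import zoom

    def guided_band_upsample(low_band, high_luma, scale=2):
        # Hypothetical sketch: sharpen a low-res band using the local
        # detail of a co-registered high-res luminance band, assumed
        # to be exactly `scale` times the low band's resolution.
        up = zoom(low_band, scale, order=3)  # plain spline upsample
        # Low-pass the guide so the ratio isolates its fine detail
        luma_lp = zoom(zoom(high_luma, 1.0 / scale, order=3), scale, order=3)
        detail = high_luma / np.maximum(luma_lp, 1e-6)
        # Inject the guide's detail; where the bands are not actually
        # correlated, this is exactly where hallucination creeps in
        return up * detail

The hallucination risk mentioned above shows up precisely where the ratio term invents structure the low-res band never recorded.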

"the optical Multi-Spectral Instrument (MSI) samples 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60 m spatial resolution"

"Due to the particular geometrical layout of the focal plane, each spectral band of the MSI observes the ground surface at different times."

https://sentiwiki.copernicus.eu/web/s2-mission

I’m making some guesses, because I don’t understand most of the optics and camera design that the ESA page describes. For instance, can anyone explain why there’s a big ~250 ms offset between measuring different light wavelengths, despite the optics and filters being fixed immobile relative to each other? Thank you.

The time per orbit is about 100 minutes, in a sun-synchronous orbit.

Actually there are 3 satellites: the constellation is supposed to be 2, and there’s currently a spare as well. But the orbits are very widely separated, supposedly on opposite sides of the planet, so I don’t know how much enhancement there could be from combining the images from all the satellites, or whether the OP’s method even tries that.

Anyway, the folks at ESA working with Sentinel-2/Copernicus must have already thought very hard about anything they can do to enhance these images, surely?

Edit: The L1BSR project, which is linked from the OP’s git page, does include ‘exploiting sensor overlap’! So I assume it really is doing a process similar to what I’ve suggested.

RicoElectrico•6mo ago
Sentinel-2 images are not exactly lined up across different revisits of the same spot; there are minute yet perceptible subpixel offsets. If there is sufficient aliasing in the system, it should be theoretically possible to extract extra information from multiple visits. However, the linked repo doesn't appear to do that.
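The textbook baseline for that multi-revisit idea is shift-and-add: resample each revisit onto a finer grid at its subpixel offset and average the overlaps. A minimal NumPy sketch, assuming the offsets are already known (in practice they would come from subpixel registration) and that the frames are aliased enough to carry extra information:

    import numpy as np

    def shift_and_add(frames, shifts, scale=2):
        # frames: list of (H, W) low-res revisits of the same spot
        # shifts: per-frame (dy, dx) subpixel offsets, in low-res pixels
        H, W = frames[0].shape
        acc = np.zeros((H * scale, W * scale))
        cnt = np.zeros_like(acc)
        ys, xs = np.mgrid[0:H, 0:W]
        for frame, (dy, dx) in zip(frames, shifts):
            # Drop every low-res sample into its nearest fine-grid cell
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, H * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, W * scale - 1)
            np.add.at(acc, (hy, hx), frame)
            np.add.at(cnt, (hy, hx), 1.0)
        return acc / np.maximum(cnt, 1.0)

Without aliasing there is nothing to recover and this just averages noise, which is exactly the condition the comment points at.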
Brajeshwar•6mo ago
The topic is interesting to us, as we (especially my co-founder) have done a lot of research in this area, both adapting existing super-resolution methods and inventing new ones for satellite images. We discussed this in some detail yesterday.

Btw, we have a demo of the result of the enhancement we achieved about 2 years ago at https://demo.valinor.earth

Looking at this implementation, the noise artifacts likely stem from a few sources. The RCAN model normalizes its input by dividing by 400, a fixed assumption about Sentinel-2’s radiometric range that doesn’t account for atmospheric variability or scene-specific characteristics. Plus, working with L1B data means you’re enhancing atmospheric artifacts along with ground features: those hazy patterns aren’t just sensor noise but actual atmospheric scattering that gets amplified during super-resolution.
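To make the normalization point concrete, here is the difference between the fixed divisor described above and a scene-adaptive stretch; a sketch with hypothetical function names, assuming raw digital-number arrays:

    import numpy as np

    def normalize_fixed(img, divisor=400.0):
        # Fixed radiometric assumption: hazy or unusually bright scenes
        # fall outside the range the model was trained to expect
        return img / divisor

    def normalize_robust(img, lo_pct=1.0, hi_pct=99.0):
        # Scene-adaptive alternative: stretch between robust percentiles,
        # so per-scene brightness and atmosphere shift the mapping
        # instead of breaking it
        lo, hi = np.percentile(img, [lo_pct, hi_pct])
        return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)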

Over the past 2 years, we’ve hit several walls that might sound familiar:

- Models trained on clean datasets (DIV2K, etc.) completely fall apart on real satellite imagery with clouds, shadows, and atmospheric effects.

- The classic CNN architectures like RCAN struggle with global context - they’ll sharpen a building edge but miss that it’s part of a larger urban pattern.

- Training on one sensor and deploying on another is impossible without significant degradation.

Some fixes we’ve found effective:

- Incorporate atmospheric correction directly into the SR pipeline (check out the MuS2 benchmark paper from 2023).

- Use physics-informed neural networks that understand radiative transfer.

- Multi-temporal stacking before SR dramatically reduces noise while preserving real features (see the sketch after this list).
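A minimal sketch of that last point, assuming co-registered scenes of the same tile and optional cloud/shadow masks (the argument names are illustrative):

    import numpy as np

    def temporal_median_stack(scenes, bad_masks=None):
        # scenes: list of (H, W) co-registered acquisitions over time
        # bad_masks: optional boolean arrays, True = cloud/shadow pixel
        stack = np.stack(scenes).astype(float)    # (T, H, W)
        if bad_masks is not None:
            stack[np.stack(bad_masks)] = np.nan   # ignore masked pixels
        # Median over time suppresses transients (clouds, sensor noise)
        # while keeping stable ground features for the SR stage
        return np.nanmedian(stack, axis=0)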

For anyone diving deep into this space, check out:

- SRRepViT (2024) - achieves similar quality to heavyweight models with only 0.25M parameters.

- DiffusionSat - the new foundation model that conditions on geolocation metadata.

- The L1BSR approach from CVPR 2023 that exploits Sentinel-2’s detector overlap for self-supervised training.

- FocalSR (2025) with Fourier-transform attention - game changer for preserving spectral signatures.

Also worth exploring is the WorldStrat dataset for training, and if you’re feeling adventurous, the new SGDM models claiming a 32x enhancement (though take that with a grain of salt for operational use).

The real breakthrough will likely come from models that jointly optimize for visual quality AND radiometric accuracy. Current models excel at one or the other, but rarely both.
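One way to phrase that joint objective, as a hedged PyTorch sketch (the pooled-consistency term is a crude stand-in for real radiometric accuracy, not anything a published model necessarily uses):

    import torch.nn.functional as F

    def joint_sr_loss(pred, target, alpha=0.5):
        # Visual fidelity: plain per-pixel L1
        visual = F.l1_loss(pred, target)
        # Radiometric proxy: local band means must survive SR, so the
        # network can't buy sharpness by drifting in brightness
        radiometric = F.l1_loss(F.avg_pool2d(pred, 8),
                                F.avg_pool2d(target, 8))
        return visual + alpha * radiometric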

If you’re interested in these topics, we would love to connect. We are at brajeshwar@valinor.earth and amir@valinor.earth