
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
199•yi_wang•7h ago•78 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
93•RebelPotato•6h ago•24 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
17•monero-xmr•3h ago•4 comments

DoNotNotify is now Open Source

https://donotnotify.com/opensource.html
6•awaaz•1h ago•3 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
284•valyala•15h ago•55 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
223•mellosouls•17h ago•379 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
95•swah•4d ago•175 comments

The Architecture of Open Source Applications (Volume 1) Berkeley DB

https://aosabook.org/en/v1/bdb.html
22•grep_it•5d ago•2 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
180•surprisetalk•14h ago•181 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
33•pentagrama•3h ago•7 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
189•AlexeyBrin•20h ago•36 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
191•vinhnx•18h ago•19 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
79•gnufx•13h ago•62 comments

uLauncher

https://github.com/jrpie/launcher
19•dtj1123•4d ago•0 comments

Substack confirms data breach affects users’ email addresses and phone numbers

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
49•witnessme•4h ago•14 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
353•jesperordrup•1d ago•104 comments

Wood Gas Vehicles: Firewood in the Fuel Tank (2010)

https://solar.lowtechmagazine.com/2010/01/wood-gas-vehicles-firewood-in-the-fuel-tank/
45•Rygian•2d ago•16 comments

Moroccan sardine prices to stabilise via new measures: officials

https://maghrebi.org/2026/01/27/moroccan-sardine-prices-to-stabilise-via-new-measures-officials/
3•mooreds•5d ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
97•momciloo•15h ago•23 comments

First Proof

https://arxiv.org/abs/2602.05192
143•samasblack•17h ago•87 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
600•theblazehen•3d ago•218 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
112•thelok•16h ago•25 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
10•todsacerdoti•6h ago•1 comment

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
335•1vuio0pswjnm7•21h ago•542 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
43•mbitsnbites•3d ago•6 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
916•klaussilveira•1d ago•277 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
123•randycupertino•10h ago•250 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
38•languid-photic•4d ago•20 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
173•speckx•4d ago•258 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
307•isitcontent•1d ago•39 comments

Line scan camera image processing for train photography

https://daniel.lawrence.lu/blog/y2025m09d21/
449•dllu•5mo ago

Comments

j_bum•5mo ago
What a beautiful example of image processing. Great post
jeffbee•5mo ago
Okay I was stumped about how this works because it's not explained, as far as I can tell. But I guess the sensor array has its long axis perpendicular to the direction the train is traveling.
miladyincontrol•5mo ago
Line scan sensors are basically just scanners; heck, people make 'em out of scanners.

Usually the issue is they need rather still subjects, but in this case rather than the sensor doing a scanning sweep they're just capturing the subject as it moves by, keeping the background pixels static.

krackers•5mo ago
It only works for trains because the image of train at t+1 is basically image of train at time t shifted over by a few pixels, right? It doesn't seem like this would work to capture a picture of a human, since humans don't just rigidly translate in space as they move.
makeitdouble•5mo ago
If the human is running and doesn't shake around frantically, it works decently. There are samples of horse race finish-line pics in the article, and they look pretty good IMHO.

It falls apart when the subject is either static or moves its limbs faster than the whole subject is moving (e.g. fist bumping while slowly walking past the camera would screw it up).

flir•5mo ago
Depends what you're going for.

https://en.wikipedia.org/wiki/Slit-scan_photography#/media/F...

dllu•5mo ago
Thanks, I added a section called "Principle of operation" to explain how it works.
ansgri•5mo ago
What's your FPS/LPS in this setup? I've experimented with similar imaging with an ordinary camera, but LPS was limiting, and I know line-scan machine vision cameras can output some amazing numbers, like 50k+ LPS.
blooalien•5mo ago
Absolutely fascinating stuff! Thank you so much for adding detailed explanations of the math involved and your process. Always wondered how it worked but never bothered to look it up until today. Reading your page pushed it beyond idle curiosity for me. Thanks for that. And thanks also to HN for always surfacing truly interesting reading material on a daily basis!
eschneider•5mo ago
You use a single vertical line of sensors and resample "continuously". When doing this with film, the aperture is a vertical slit and you continuously advance the film during the exposure.

For "finish line" cameras, the slit is located at the finish line and you start pulling film when the horses approach. Since the exposure is continuous, you never miss the exact moment of the finish.

flir•5mo ago
The analogue equivalent (a slit scan camera) is easier to understand, I think https://www.lomography.com/magazine/283280-making-a-slit-sca... https://petapixel.com/2017/10/18/role-slit-scan-image-scienc...

You can also get close in software. Record some video while walking past a row of shops. Use ffmpeg to explode the video into individual frames. Extract column 0 from every frame, and combine them into a single image, appending each extracted column to the right-hand-side of your output image. You'll end up with something far less accurate than the images in this post, but still fun. Also interesting to try scenes from movies. This technique maps time onto space in interesting ways.
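
A minimal sketch of that software version, using OpenCV to read frames directly instead of exploding them to disk with ffmpeg (the input file name is hypothetical):

    # Grab column 0 of every frame and stack the columns left to right,
    # so time maps onto the x axis. "walk.mp4" is a placeholder input.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("walk.mp4")
    columns = []
    while True:
        ok, frame = cap.read()            # frame: (H, W, 3) uint8, BGR
        if not ok:
            break
        columns.append(frame[:, 0, :])    # one pixel-wide slice per frame
    cap.release()

    strip = np.stack(columns, axis=1)     # (H, num_frames, 3)
    cv2.imwrite("strip.png", strip)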

djmips•5mo ago
https://www.youtube.com/watch?v=Ut0nKdLCAEo
whartung•5mo ago
These are amazing images. I don't understand what's going on here, but I do like the images.
Etheryte•5mo ago
Imagine a camera that only takes pictures one pixel wide. Now make it take a picture, for example, 60 times a second and append every pixel-wide image together in order. This is what's happening here, it's a bunch of one pixel wide images ordered by time. The background stays still as it's always the same area captured by that one pixel, resulting in the lines, but moving objects end up looking correct as they're spread out over time.

At first, I thought this explanation would make sense, but then I read back what I just wrote and I'm not sure it really does. Sorry about that.

kiddico•5mo ago
It made sense to me!
JKCalhoun•5mo ago
Yeah, like walking past a door that's cracked just a bit so you can see into an office only a slit. Now reconstruct the whole office from that traveling slit that you saw.

Very cool.

whartung•5mo ago
No, thank you. This was perfect. It completely explains where the train comes from and where the lines come from.

Lightbulb on.

Aha achieved. (Don’t you love Aha? I love Aha.)

bscphil•5mo ago
IMO the denoising looks rather unnatural and emphasizes the remaining artifacts, especially color fringe around details. Personally I'd leave that turned off. Also, with respect to the demosaic step, I wonder if it's possible to implement a version of RCD [1] for improved resolution without the artifacts that seem to result from the current process.

[1] https://github.com/LuisSR/RCD-Demosaicing

Cloudef•5mo ago
Yeah, I don't think the denoised result looks that good either.
dllu•5mo ago
Yeah I actually have it disabled by default since it makes the horizontal stripes more obvious and it's also extremely slow. Also, I found that my vertical stripe correction doesn't work in all cases and sometimes introduces more stripes. Lots more work to do.

As for RCD demosaicing, that's my next step. The color fringing is due to the naive linear interpolation for the red and blue channels. But, with the RCD strategy, if we consider that the green channel has full coverage of the image, we could use it as a guide to make interpolation better.
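
For illustration, a rough sketch of the green-as-guide idea (my reading of it, not the author's code): interpolate the color difference R − G rather than R itself, then add the full-coverage green back, which suppresses fringing at edges. The 1D layout below (red at every other pixel, green known everywhere) is a simplifying assumption:

    # Hypothetical 1D line: green known at every pixel, red only at red_idx.
    import numpy as np

    def interp_red_naive(red_samples, red_idx, n):
        # plain linear interpolation of red -> fringes at sharp edges
        return np.interp(np.arange(n), red_idx, red_samples)

    def interp_red_green_guided(red_samples, red_idx, green_full, n):
        # interpolate the chroma difference (R - G), then add green back in
        diff = red_samples - green_full[red_idx]
        return green_full + np.interp(np.arange(n), red_idx, diff)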

DoctorOetker•5mo ago
When you do the demosaicing, and perhaps other steps, did you ever consider declaring the x-positions, spline parameters, ... as latent variables to estimate?

Consider a color histogram: the logo (showing color oscillations) would have a wider spread and a lower-peaked histogram, versus a correctly mapped one (just the few colors plus or minus some noise), which would show a very thin but strong peak in colorspace. A high-variance color occupation has higher entropy compared to a low-variance, strongly centered peak (or multi-peak) distribution.

So it seems colorspace entropy could be a strong term in a loss function for optimization (using RMAD).
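
If I'm reading the suggestion right, the entropy term could look something like this (an illustrative sketch, not anything from the article):

    # Entropy of a 3D color histogram: a correctly mapped flat-colored logo
    # collapses into a few sharp peaks (low entropy), while color oscillations
    # smear the histogram (high entropy).
    import numpy as np

    def color_entropy(img, bins=32):
        # img: float array of shape (H, W, 3) with values in [0, 1]
        hist, _ = np.histogramdd(img.reshape(-1, 3), bins=bins,
                                 range=[(0.0, 1.0)] * 3)
        p = hist[hist > 0] / hist.sum()
        return float(-np.sum(p * np.log(p)))   # in nats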

DoctorOetker•5mo ago
Do you share some of the original raw recordings somewhere?
GlibMonkeyDeath•5mo ago
If you like this sort of thing, check out https://www.magyaradam.com/wp/ too. A lot of his work uses a line scan camera.
JKCalhoun•5mo ago
The video [https://www.magyaradam.com/wp/?page_id=806] blew my mind. I can only imagine he reconstructed the video by first reconstructing one frame's worth of slits, then shifting them over by one column and adding the next slit's data.
fudged71•5mo ago
None of the shots in that video are using Slit Scan technique. It’s using a technique called Mean Stack Mode to get the average pixel value across multiple frames, over a rolling selection of an input video.
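
Roughly, that mean-stack effect is a rolling average over a window of frames (a sketch with a hypothetical frame iterator, not the artist's actual workflow):

    # Each output frame is the mean of the last `window` input frames,
    # which smears anything that moves while keeping the static parts crisp.
    from collections import deque
    import numpy as np

    def rolling_mean_stack(frames, window=30):
        buf = deque(maxlen=window)
        for frame in frames:              # frame: float array (H, W, 3)
            buf.append(frame)
            if len(buf) == window:
                yield np.mean(list(buf), axis=0)
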
JKCalhoun•5mo ago
Reminds me of the early experiments with using a flat-bed scanner as a digital back. Here is one: https://www.sentex.net/~mwandel/tech/scanner.html
ncruces•5mo ago
That's a lot more than I thought I'd want to know about this, but I was totally nerd sniped. Great writeup.
card_zero•5mo ago
It's neat that it captured the shadow of the subway train, too, which arrived just ahead of the train itself. This virtual shadow is thrown against a sort of extruded tube with the profile of the slice of track and wall that the slit was pointed at.
its-summertime•5mo ago
> Hmm, I think my speed estimation still isn’t perfect. It could be off by about 10%.

Probably would be worth asking a train driver about this, e.g. "what is a place with smooth track and constant speed"

tecleandor•5mo ago
Maybe an optical flow sensor to estimate speed in real time?
jebarker•5mo ago
Or a radar gun to measure it
lttlrck•5mo ago
They have an amazing painterly quality. I'm not a huge train fan but I'd put some of these on my wall.
bkettle•5mo ago
Wow, great article. I love the cable car photo https://upload.wikimedia.org/wikipedia/commons/e/e0/Strip_ph...

Must be somewhat interesting deciding on the background content, too.

anonu•5mo ago
Super cool. I wonder if you could re-use a regular 2-d CMOS digital camera sensor to the same effect. But now I realize your sensor is basically 1-D and has a 95khz sampling rate. At the same rate with a 4k sensor you'd have way too much data to store and would need to throw most of it away.
SJC_Hacker•5mo ago
Pretty sure you could do it, but it would be very expensive, because you'd need a lot more very fast ADCs.

Like if the camera is $5k, in order to get that exposure time in full-field you would need to duplicate the hardware 800 times or whatever you wanted the horizontal resolution to be. That's a lot of zeros for a single camera.

gruntwork•5mo ago
Pretty sure it is doable with consumer cameras, although of course matching the physical movement would be a lot harder. For instance, a Sony a7R IV has a 1/20s readout. You see that with the electronic shutter, because the camera scans from top to bottom, which for video is bad. But it does mean that you can record 10fps full-frame compressed raw photos with a horizontal resolution of 6336 pixels, so that would be an "acquisition rate" of 63 kHz.

The problem, of course, is that you need to shift the camera by one sensor width every tenth of a second, accurate to the pixel, if you want to make use of that full horizontal temporal resolution. And I'm not sure how you reconcile the 1/20s readout with all of that. So pessimistically, maybe only ~30 kHz.

Actually, I did the math, and if you can accept video compression, the video modes might be sufficient. 4K@30fps looks like ~64 kHz. And if you had a more capable video camera, that could be 4-8 times better.
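
As a sanity check on those figures (same assumptions as above, nothing measured; the 4K number assumes the 2160 rows are what get read out 30 times a second):

    burst_fps = 10              # a7R IV compressed-raw burst rate
    columns = 6336              # horizontal pixels per frame
    print(burst_fps * columns)  # 63,360 columns/s ~ 63 kHz

    uhd_rows = 2160             # rows read top to bottom by the rolling shutter
    video_fps = 30
    print(uhd_rows * video_fps) # 64,800 rows/s ~ 64 kHz at 4K@30fps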

SJC_Hacker•5mo ago
Perhaps I misunderstood the original question; I thought the idea was to go full field at 95 kHz, which would be either very expensive or very crummy resolution (like 50x50 crummy), or some combination of the two. As for getting a full-field camera to work like a line scan camera, that should be possible but would require some rewiring or special software; you're probably better off just getting a regular line scan camera.

There is actually a way to get full field at very high frame rates that is NOT ridiculously expensive, but it's not sustained. I believe it involves some type of "sample and hold" with something like capacitor banks, so the digital readout can be done slowly.

syntaxing•5mo ago
Fun read! I used to work in sensor calibration, and most people take for granted how much engineering goes into making phones take good photos. There's a nontrivial amount of math and computational photography that goes into the modern phone camera.
ortusdux•5mo ago
Iirc, at the last Olympics, Omega paired a high-frequency linear display with their finish-line strip cameras. Regular cameras saw a flashing line, but the backdrop to photo-finishes was an Omega logo. Very subtle, but impressive to pull off.
meindnoch•5mo ago
See the flashing vertical bar: https://youtube.com/shorts/TSSCfnBBDR0
ortusdux•5mo ago
I looked into line cameras for a project. I think their main application is in quality control of food on conveyor belts. There are plenty of automated sorting systems that can become a bottleneck. One of the units I spec'd out could record an 8k-pixel line at up to 40 kfps.

https://youtu.be/E_I9kxHEYYM

SJC_Hacker•5mo ago
They are used in OCT (optical coherence tomography) as well

OCT is a technique that gets "through" tissue using a beam in the near infrared (roughly 950 nm, with a spread of roughly 100 nm). The return is passed through an interferometer and what amounts to a diffraction grating to produce the "spread" that the line camera sees. After some signal processing (the FFT is a big one), you can get the intensity at depth. If you sweep in X, Y somehow, usually by deflecting the beam with a mirror, you can obtain a volumetric image like an MRI or sonogram. Very useful for imaging the eye, particularly the back of the retina where the blood vessels are.

s0rce•5mo ago
Yah, lots of neat line scan camera applications in spectroscopy. Basically any grating application. 950nm would be on the edge of where you'd implement a Si CCD for OCT as the sensitivity drops as the Si is no longer absorbing. InGaAs detectors are used further in the NIR.
tcpekin•5mo ago
Satellites are also a big use case.
defrost•5mo ago
A number of the sats I worked with are single point cameras .. the satellite spins about a major axis orientated in the direction of travel, the point camera rotates with the satellite and a series of points of data are written to a line of storage as the camera points at the earth and pans across as the sat also moves forward.

Data stops being written as the sat rotates the camera away from the planet and resumes once it has rolled over enough to again point at the earth.

It may seem like a pedantic difference: a "line scan camera" is stationary while mirrors inside it spin or another mechanism causes it to "scan" a complete vertical line (perhaps all at once, perhaps as the focal point moves), versus a camera in a satellite that has no moving parts and just records a single point directly in front of the instrument, while the entire satellite spins and moves forwards.

fleventynine•5mo ago
Does anyone know what it looks like when you use a line scan camera to take a picture of the landscape from a moving car or train? I suspect the parallax produces some interesting distortions.
notatoad•5mo ago
It’s just a blur. Like the background of the photos in this article.

You can get some cool distortions at very slow speeds, but at car or train speeds you won’t see anything

account42•5mo ago
The background in the article is not a "blur".
dllu•5mo ago
I've taken a couple of pics from a moving train...

Nankai 6000 series, Osaka:

https://i.dllu.net/nankai_19b8df3e827215a2.jpg

Scenery in France:

https://i.dllu.net/preview_l_b01915cc69f35644.png

Marseille, France:

https://i.dllu.net/preview_raw_7292be4e58de5cd0.png

California:

https://i.dllu.net/preview_raw_d5ec50534991d1a4.png

https://i.dllu.net/preview_raw_e06b551444359536.png

Sorry for the purple trees. The camera is sensitive to near infrared, in which trees are highly reflective, and I haven't taken any trains since buying an IR cut filter. Some of these also have dropped frames and other artifacts.

dddw•5mo ago
Exactly what I wanted to know. Is it technically feasible to 'scan' a whole landscape of, let's say, an hour-long train ride?
thekid314•5mo ago
I love this! I tried to apply the same idea to scan the tallest tree in New England with a drone. It didn't come out great, but I might just try again now.

Here is how it came out: https://www.daviddegner.com/wp-content/uploads/2023/09/Tree-...

It was part of this story: https://www.daviddegner.com/photography/discovering-old-grow...

cenamus•5mo ago
Fascinating perspective still!
stubish•5mo ago
Anyone know of a steam train captured in the same way? I'm interested in the effect of the parts with vertical motion such as the pistons and steam clouds, combined with the largely static body.
sverhagen•5mo ago
Those parts would appear oddly shaped, like the distorted limbs of athletes in a photo finish.
dllu•5mo ago
One day I'll muster up the motivation to bring my setup to Roaring Camp to scan those Shay geared locomotives but those moving parts will indeed appear weird and distorted.
stubish•5mo ago
I'm mostly curious if those distortions capture a sense of movement or motion in the pistons, with their regular sinusoidal beats. And no idea how steam clouds would come out. Our minds also visualize the moving parts differently to how a regular camera captures them or the eye sees them.
Waterluvian•5mo ago
The way the train sits crisply and motionlessly locked between these perfect stripes of colour gives it an incredible sense of speed.
decae•5mo ago
I have been creating animations using a similar process, but with a regular camera and manually splicing the frames together. [1,2,3] The effect is quite interesting in how it forces focus on the subject, reducing the background into an abstract pattern. Each 'line' is around 15px wide.

[1] https://youtube.com/shorts/VQuI1wW8hAw [2] https://youtube.com/shorts/vE6kLolf57w [3] https://youtube.com/shorts/QxvFyasQYAY

I also shot a timelapse of the Tokyo skyline at sunset and applied a similar process [4], then motion tracked it so that time is traveling across the frame from left to right[5]. Each line here is 4 pixels wide and the original animation is in 8k.

[4] https://youtu.be/wTma28gwSk0 [5] https://youtu.be/v5HLX5wFEGk

Cloudef•5mo ago
Wow that skyline sunset time lapse is beautiful. Really good idea.
djmips•5mo ago
I like this video about a photo-finish line camera at a horse track. https://www.youtube.com/watch?v=Ut0nKdLCAEo Maybe someone else will enjoy it too.
amenghra•5mo ago
More line scan trains: https://news.ycombinator.com/item?id=35738987
fooker•5mo ago
Perhaps some of your noise issues are solvable by using a lens with a large aperture?

Photo finish lenses used to be wildly expensive and sometimes one of a kind.

dllu•5mo ago
Yeah they have a whopping 300mm f/2.0 lens for photo finish! I have been using various primes including a Samyang 135mm f/2, Voigtländer Apo Lanthar 125mm f/2.5, Voigtländer Nokton 58mm f/1.4, Voigtländer Ultron 35mm f/1.7, Myutron 50mm f/2.6, etc. The problem with a really large aperture is that it's hard to nail focus.
motorest•5mo ago
This is one of the best submissions I read in ages. Thank you for such a treat.
sans_souse•5mo ago
This brings me back to 90's arcade classic Final Fight
amelius•5mo ago
I found this other post about the same topic:

https://news.ycombinator.com/item?id=35738987

Anyway, I was looking for line-scan images of people walking down a busy street. Curious what they would look like.

julik•5mo ago
Slit scan photography is very cool. And bonus points for making trains the subject matter! Fun fact: back in the day some acquaintances of mine actually made some photos with a flatbed scanner, utilizing its moving head as a slit aperture; it was a neat project.
londons_explore•5mo ago
Notable that nearly all cameras can be turned into a line scan camera if you can get your software low level enough to send commands to write the registers on the sensor.

You simply set the maximum and minimum readout rows to be 1 apart, and suddenly your 'frame' rate goes up to 60,000 FPS where each frame is only a pixel high.

You might have to fiddle with upper and lower 'porch' regions to make things fast too.

You must have the line along the long dimension of the image - the hardware has no capability to do the short edge.

decae•5mo ago
How is this possible? What sort of camera can do this?
londons_explore•5mo ago
Almost any camera. Eg. The OV2640.

But you need to have really low level access to registers. Said registers are normally configured by i2c (aka SCCB).

In Linux I think you'd need to patch a driver to do it for example.

decae•5mo ago
That sounds like a lot of fun to play around with, thank you!
ripe•5mo ago
Intrigued, I looked into the basics of "line-scan vision systems".

TIL about an industrial inspection application where your line camera is scanning objects passing by on a conveyor. Since you can never guarantee a rock-steady conveyor speed, you need real-time control of the scanning speed based on the current conveyor speed (using encoders) [1]

I see that the bulk of the article is about somehow using math for estimating the train speed so that the scanning can be interpreted correctly.

[1] this camera vendor has an explanatory video that explains the need for an encoder around 4:15. https://m.youtube.com/watch?v=E_I9kxHEYYM&t=35s&pp=2AEjkAIB
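
The encoder requirement boils down to locking the line rate to the motion; roughly, with illustrative numbers (not from the video):

    # To keep pixels square, trigger one scan line per pixel-footprint of travel.
    belt_speed_mm_s = 500.0      # reported by the encoder, varies in real time
    pixel_footprint_mm = 0.1     # object-space width covered by one scan line
    line_rate_hz = belt_speed_mm_s / pixel_footprint_mm
    print(line_rate_hz)          # 5000.0 lines/s for these made-up values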

srean•5mo ago
HN'er jo-m (https://news.ycombinator.com/user?id=jo-m) has a project, https://trains.jo-m.ch/#/trains/list, that deserves a mandatory mention.

owenversteeg•5mo ago
These have a beautiful aesthetic quality to them that vaguely reminds me of old space photos. I wonder why the aesthetics seem so similar - I couldn't find any hints in the processing so perhaps it's from the way the line scan camera's sensor works?
iamleppert•5mo ago
Many satellite cameras have line scan sensors or push broom sensors (https://en.wikipedia.org/wiki/Push_broom_scanner) and make their image as they move in their orbit.
_giorgio_•5mo ago
How much does your camera cost?

And... are there cheaper options? ;-)

account42•5mo ago
I wouldn't have expected a Bayer pattern to be used here; is that common with 1D color sensors?
SteveAlexander•5mo ago
Reminds me of Stani Michiels' Jakarta Megalopolis, producing long (very long) photographs of Jakarta's streets.

https://anagrambooks.com/jakarta-megalopolis

https://anagrambooks.com/sites/default/files/styles/slide/pu...

ca_tech•5mo ago
The reason I like line scan images is that they break our mental model of images. We are not looking at the image of a train. We are looking at a time series graph of what occupied a very small, defined area in space.
m463•5mo ago
Everybody can (and should) try this with their phone camera.

Just put it in Panorama mode and move the camera in non-standard ways.

Used like this, it will work on passing trains. It will also (kind of) work out the window of a train.

It works out the window of a car too. It will get confused and "compress" parts of the panorama.

It will also work vertically...

It works on tall trees - scan with the arrow from the base upwards to the top.

You can also make weird photos vertically if you keep going. Start from in front of you, up over your head and facing backwards.

Also fun to try is facing down at the ground and walking forwards. You can get a garden path or the sidewalk or other fun panoramas. If you want, you can intentionally stick your feet in the frame and get "footsteps" along the panorama.

:)