
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
125•yi_wang•4h ago•35 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
53•RebelPotato•3h ago•10 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
247•valyala•12h ago•49 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
165•surprisetalk•11h ago•155 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
195•mellosouls•14h ago•350 comments

Total surface area required to fuel the world with solar (2009)

https://landartgenerator.org/blagi/archives/127
18•robtherobber•4d ago•5 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
73•gnufx•10h ago•59 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
62•swah•4d ago•113 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
180•AlexeyBrin•17h ago•35 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
171•vinhnx•15h ago•17 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
319•jesperordrup•22h ago•97 comments

First Proof

https://arxiv.org/abs/2602.05192
134•samasblack•14h ago•77 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
62•chwtutha•2h ago•10 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
82•momciloo•12h ago•16 comments

Wood Gas Vehicles: Firewood in the Fuel Tank (2010)

https://solar.lowtechmagazine.com/2010/01/wood-gas-vehicles-firewood-in-the-fuel-tank/
31•Rygian•2d ago•7 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
14•witnessme•1h ago•4 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
104•thelok•13h ago•22 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
40•mbitsnbites•3d ago•4 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
112•randycupertino•7h ago•233 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
577•theblazehen•3d ago•208 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
59•duxup•1h ago•13 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
304•1vuio0pswjnm7•18h ago•482 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
189•valyala•12h ago•173 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
144•josephcsible•10h ago•178 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
34•languid-photic•4d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
233•limoce•4d ago•125 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
904•klaussilveira•1d ago•276 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
150•speckx•4d ago•235 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
303•isitcontent•1d ago•39 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
118•onurkanbkrc•16h ago•5 comments

When memory was measured in kilobytes: The art of efficient vision

https://www.softwareheritage.org/2025/06/04/history_computer_vision/
155•todsacerdoti•8mo ago

Comments

alightsoul•8mo ago
Amazing. Wonder how fast it would be on a modern computer
Hydration9044•8mo ago
+1. I wonder which is faster when compared to OpenCV's findContours.
kmoser•8mo ago
I want to believe that however obsolete these old algorithms are today, at least some aspects of the underlying code and/or logic should prove useful to LLMs as they try to generate modern code.
klodolph•8mo ago
Maybe… some of these algorithms from the 1980s struggled to do basic OCR, so they may need a lot of modification to be useful.
PaulHoule•8mo ago
That whole approach of "find edges, convert to line drawing, process a line drawing" in the 1980s struggled to do anything at all.
Retric•8mo ago
There was a surprising amount of useful OCR happening in the 70’s.

High error rates and significant manual rescanning can be acceptable in some applications, as long as there’s no better alternative.

GuB-42•8mo ago
I find that modern OCR, audio transcription, etc. are beginning to have the opposite problem: they are too smart.

It means that they make far fewer mistakes, but when they do, the mistakes can be subtle. For example, if the text is "the bat escaped by the window", a dumb OCR may write "dat" instead of "bat". When you read the resulting text, you notice it and, using outside clues, recover the original word. A smart OCR will notice that "dat" isn't a word and may change it to "cat", and indeed "the cat escaped by the window" is a perfectly good sentence. Unfortunately, it is wrong and confusing.

devilbunny•8mo ago
Thankfully, most speech misrecognition events are still obvious. I have seen this in OCR and, as you say, it is bad. There are enough mistakes in the sources; let us not compound them.
taeric•8mo ago
I'm not sure I can sign on to this. In particular, it sounds like an indictment of many algorithms. But how many were there? And did any go on to give good results?

Consider: OCR was a very new field, and much of the struggle was getting the data into a form you could even try recognition against. It should be no surprise that they didn't succeed that often. It would be more surprising if they had had a lot of different algorithms.

monkeyelite•8mo ago
The idea that ML is the only way to do computer vision is a myth.

Yes, it may not make sense to use classical algorithms to try to recognize a cat in a photo.

But there are often virtual or synthetic images which are produced by other means or sensors for which classical algorithms are applicable and efficient.

thatcat•8mo ago
Any recommendations on background reading for classical CV for radar?
monkeyelite•8mo ago
I don’t know anything about radar. I have a book called “Machine Vision” (Jain, Kasturi, Schunck), easy undergrad level, but also very useful. It’s $6 on Amazon.
ipunchghosts•8mo ago
Kasturi was my undergraduate honors advisor!
monkeyelite•8mo ago
Small world! These are always just names on a book to me.
thatcat•8mo ago
Awesome, thanks!
sceadu•8mo ago
Don't know about radar but here's a good book on classical CV https://udlbook.github.io/cvbook/

even though I think Simon admits that most of it is obsolete after DL computer vision came about

monkeyelite•8mo ago
> is obsolete after DL computer vision came about

I just don’t understand this. Why would new technology invalidate real understanding and useful computer algorithms?

sokoloff•8mo ago
I worked (as an intern) on autonomous vehicles at Daimler in 1991. My main project was the vision system, running on a network of transputer nodes programmed in Occam.

The core of the approach was “find prominent horizontal lines, which exhibit symmetry about a vertical axis, and frame-to-frame consistency”.

Finding horizontal lines was done by computing variances in pixel value. Finding symmetry about a vertical axis was relatively easy. Ultimately, a Kalman filter worked best for frame-to-frame tracking. (We processed roughly 120x90 output from the variance algorithm, which ran on a PAL video stream.)

There’s probably more computing power on a $10 ESP32 now, but I really enjoyed the experience and challenge.

This was our vehicle: https://mercedes-benz-publicarchive.com/marsClassic/en/insta...
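
A toy sketch of that idea, assuming a grayscale frame as a 2-D numpy array (the synthetic frame, thresholds, and scoring here are made up for illustration; the real system's variance computation and Kalman tracking were of course far more involved):

```python
import numpy as np

def prominent_rows(frame, top_k=3):
    """Score each row by intensity variance; high variance suggests a
    prominent horizontal structure (e.g. a car's bumper or roof line)."""
    row_var = frame.var(axis=1)
    return np.argsort(row_var)[::-1][:top_k]

def symmetry_score(frame):
    """Crude left-right symmetry about the vertical center axis:
    1.0 means perfectly mirror-symmetric, lower means less so."""
    flipped = frame[:, ::-1]
    diff = np.abs(frame - flipped).mean()
    return 1.0 / (1.0 + diff)

# Synthetic 90x120 frame with one bright, centered horizontal bar
frame = np.zeros((90, 120))
frame[40:43, 20:100] = 1.0

print(prominent_rows(frame))   # the three rows of the bright bar
print(symmetry_score(frame))
```

Frame-to-frame tracking would then feed the detected line positions into a Kalman filter as noisy measurements; that part is omitted here.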

digdugdirk•8mo ago
That's awesome! What kind of hardware was needed to pull that off? And was the size of the bus any indication of the answer?
godelski•8mo ago
You could even argue that ML does classical vision in addition to other stuff.

CNNs learn Gabor filters. The AlexNet paper even shows this [0]

Or if you look at the work ViT built on, they show attention heads will also learn these filters. [1] That's actually a big part of how ViTs work: the heads integrate this type of information

[0] https://papers.nips.cc/paper_files/paper/2012/hash/c399862d3...

[1] https://arxiv.org/abs/1911.03584
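
For reference, a Gabor filter is just a sinusoid windowed by a Gaussian, and the oriented edge detectors in AlexNet's first-layer visualization look strikingly like them. A small numpy construction (the parameters are illustrative, not taken from either paper):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Real part of a Gabor filter: Gaussian envelope x oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along angle theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A 45-degree oriented edge/texture detector
k = gabor_kernel(theta=np.pi / 4)
print(k.shape)  # (15, 15)
```

Convolving an image with a bank of these at different orientations and wavelengths gives the classical feature maps that the learned first-layer CNN filters end up resembling.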

cyberax•8mo ago
One approach that blew my mind was the use of FFT to recognize objects.

FFT has this property that object orientation or location doesn't matter. As long as you have the signature of an object, you can recognize it anywhere!

changoplatanero•8mo ago
I believe orientation still matters but you’re right that position doesn’t.
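
The position claim is easy to check numerically: a circular shift only multiplies each FFT coefficient by a unit-magnitude phase factor, so the magnitude spectrum is unchanged. A minimal sketch (the signal is arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # arbitrary 1-D "object" signature
shifted = np.roll(x, 17)      # same object, different position

mag = np.abs(np.fft.fft(x))
mag_shifted = np.abs(np.fft.fft(shifted))

# Shifting in space only changes the phase, never the magnitude.
print(np.allclose(mag, mag_shifted))  # True
```

Rotation, by contrast, rotates the 2-D magnitude spectrum along with the image, which is why orientation still matters unless you add a further transform.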
Legend2440•8mo ago
FFT is equivalent to convolution, which is widely used today for object recognition in CNNs.
bobmcnamara•8mo ago
> FFT is equivalent to convolution

What do you mean by that? Could you give me an example?

timewizard•8mo ago
The basic convolution theorem.

https://en.wikipedia.org/wiki/Convolution_theorem

bobmcnamara•8mo ago
That is something else entirely.
timewizard•8mo ago
Then if you know what the OP meant why did you ask?
Grimblewald•8mo ago
Because they made a nonsensical claim that doesn't align with my (and likely their) understanding of what the FT is and does.

The FT is _NOT_ just a convolution, but under certain conditions a specific operation on FT terms is equivalent to a convolution.

bobmcnamara•8mo ago
I didn't know what they meant. There are so many FFT tricks. I was hoping this was another.
kragen•8mo ago
The FFT, composed with pointwise multiplication, composed with the inverse FFT, is equivalent to convolution. The FFT is not.
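
That identity (the convolution theorem) is easy to verify with numpy; note the result is *circular* convolution, so you'd zero-pad if you wanted linear convolution. A quick sketch with arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(32)
h = rng.standard_normal(32)

# FFT -> pointwise multiplication -> inverse FFT
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Direct circular convolution for comparison
n = len(x)
direct = np.array([sum(x[j] * h[(i - j) % n] for j in range(n))
                   for i in range(n)])

print(np.allclose(via_fft, direct))  # True
```

This is also why convolution layers *can* be computed in the frequency domain, though for the small kernels in modern CNNs the direct form is usually faster.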
mrheosuper•8mo ago
I still deal with <128kb ram system everyday
weareregigigas•8mo ago
I too need a coffee in the morning before I can do anything
DaSHacka•8mo ago
Ah, Mac user?
mrheosuper•8mo ago
more like STMicroelectronics user