Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
179•yi_wang•6h ago•62 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
88•RebelPotato•6h ago•22 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
276•valyala•14h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
215•mellosouls•16h ago•370 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
84•swah•4d ago•158 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
174•surprisetalk•13h ago•173 comments

The Architecture of Open Source Applications (Volume 1) Berkeley DB

https://aosabook.org/en/v1/bdb.html
18•grep_it•5d ago•0 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
25•pentagrama•2h ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
185•AlexeyBrin•19h ago•35 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
77•gnufx•12h ago•60 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
184•vinhnx•17h ago•18 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
343•jesperordrup•1d ago•104 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
3•monero-xmr•2h ago•0 comments

Substack confirms data breach affects users’ email addresses and phone numbers

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
38•witnessme•3h ago•10 comments

uLauncher

https://github.com/jrpie/launcher
14•dtj1123•4d ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
92•momciloo•14h ago•20 comments

First Proof

https://arxiv.org/abs/2602.05192
140•samasblack•16h ago•81 comments

Wood Gas Vehicles: Firewood in the Fuel Tank (2010)

https://solar.lowtechmagazine.com/2010/01/wood-gas-vehicles-firewood-in-the-fuel-tank/
39•Rygian•2d ago•15 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
90•chwtutha•4h ago•24 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
597•theblazehen•3d ago•216 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
110•thelok•16h ago•24 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
42•mbitsnbites•3d ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
322•1vuio0pswjnm7•20h ago•529 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
7•todsacerdoti•5h ago•1 comment

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
169•speckx•4d ago•251 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
119•randycupertino•9h ago•247 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
910•klaussilveira•1d ago•277 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
37•languid-photic•4d ago•19 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
305•isitcontent•1d ago•39 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
150•videotopia•4d ago•49 comments

$5 whale listening hydrophone making workshop

https://exclav.es/2025/08/03/dinacon-2025-passive-acoustic-listening/
105•gsf_emergency_6•1mo ago

Comments

iefbr14•1mo ago
Nice, however the mentioned GitHub link (https://github.com/loganwilliams/passive-acoustic-listening) does not exist (anymore?).
turtleyacht•1mo ago
The only repo that appears searching "passive acoustic listening" is

https://github.com/Vivek-Tate/IDS-Detection-and-Exploiting-V...

throw310822•1mo ago
That's a cheap whale. I wonder how they managed to fit it in.
actionfromafar•1mo ago
Cheap workshop!
blitzar•1mo ago
The elites don't want you to know this but the whales in the ocean are free. You can take them home. I have 458 whales.
gehsty•1mo ago
A note to just be a bit careful passively monitoring ocean acoustics, it’s easy to fall foul of military / security forces, they don’t like anything that can fingerprint a vessel.

I worked on DAS acoustic monitoring for subsea power cables (to monitor cable health!), turns out they are basically a submarine detection system.

sigmoid10•1mo ago
Reminds me of how the Navy heard the OceanGate submarine implode immediately when it lost contact en route to the Titanic, but waited several days before they admitted it, because at the time no one even knew they had such a system of hydrophones in place. I wonder what else they have that we don't know about. The oceans are not just unexplored as a habitat, but also as an intelligence theater.
defrost•1mo ago
Pretty sure a fair number of people knew the US Navy and others had hydrophones in place, they've always been coy about it though.

For interest:

* it's one reason we know so much about ocean temperatures and, tangentially, have great data on climate change being real, and

* they had some cool R&D vessels:

  FLIP was originally built to support research into the fine-scale phase and amplitude fluctuations in undersea sound waves caused by thermal gradients and sloping ocean bottoms. This acoustic research was conducted as a portion of the Navy's SUBROC program. 
~ https://en.wikipedia.org/wiki/RP_FLIP
zipy124•1mo ago
The systems are pretty public; for instance, the UK tender for Atlantic Net is easy to read. And the Russians have Bastion, which we know about as well.
hencq•1mo ago
I remember at the time it felt a little bit suspicious to me. Only after everyone already knew it had imploded did the navy come out to say that their hyper-advanced detection system for enemy submarines had, of course, also detected it.
sandworm101•1mo ago
That was not the first time such data was used to find a wreck. They have released locations for things like downed airliners for years, decades even. Everyone knows about SOSUS. The classified bits are its locations and exact capabilities.

https://en.wikipedia.org/wiki/SOSUS

sandworm101•1mo ago
Even building the equipment can be an issue. There are rules about hydrophones at certain frequencies. Just putting the plans online might run afoul of export rules. Beware of stringing multiple hydrophones together as this article suggests. Put too many on a system and you are into possible beamforming territory ... the tech used for geolocating noises underwater. The USN gets kinda twitchy about such things.

https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M...

(Search for hydrophone)
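For intuition on why arrays are the sensitive part: with two hydrophones a known distance apart, the time difference of arrival (TDOA) of a sound already gives you a bearing. A minimal sketch, not from the article; the spacing and sound speed below are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 1500.0  # m/s, approximate value for seawater

def bearing_from_tdoa(tdoa_s: float, spacing_m: float) -> float:
    """Estimate arrival angle (degrees from broadside) of a far-field
    source, given the time difference of arrival at two hydrophones."""
    # Path difference = c * tdoa; sin(theta) = path difference / spacing
    sin_theta = np.clip(SPEED_OF_SOUND * tdoa_s / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# A source 30 degrees off broadside with 1 m spacing arrives with
# tdoa = spacing * sin(30 deg) / c, about 333 microseconds
tdoa = 1.0 * np.sin(np.radians(30.0)) / SPEED_OF_SOUND
print(bearing_from_tdoa(tdoa, 1.0))  # ≈ 30.0
```

Add more elements and you can steer and localize rather than just get a single ambiguous bearing, which is exactly the capability the export rules care about.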

emsign•1mo ago
as if they own the oceans
andai•1mo ago
I know a man who built a fish shaped vehicle, and was immediately approached by the men in black...
quietbritishjim•1mo ago
Good advice but there's a bit of a difference between a device (or even several) you can knock together yourself and throw out of the side of a (surface) boat vs access to a whole undersea cable which (I have just learned) is what you need for DAS. Plus, if you can do it yourself with virtually no resources, it's a safe bet that any potential adversaries are already doing something many orders of magnitude greater.

Supposedly new submarines are so quiet that they can't be detected anyway. I'm sure there's a large element of exaggerating abilities here, but there's definitely an element of truth: in 2009, two submarines carrying nuclear weapons (not just nuclear powered) collided, presumably because they couldn't detect each other [1]. If a nuclear submarine cannot detect another nuclear submarine right next to it, then it's unlikely your $5 hydrophone will detect one at a distance.

Of course, none of this means that the military will be rational enough not to be annoyed with you.

[1] https://en.wikipedia.org/wiki/HMS_Vanguard_and_Le_Triomphant...

seydar•1mo ago
I used to be a submariner and now work in an unrelated acoustic space (acoustic analysis of the electric grid), but I'd love to learn more about the DAS world — my email is in my profile.
anfractuosity•1mo ago
Acoustic analysis of the electric grid sounds interesting, is that to detect things like sparks?
psc•1mo ago
DAS has really been taking off in the marine bioacoustics world!

https://www.birds.cornell.edu/home/deep-listening/

https://depts.washington.edu/uwb/revolutionizing-marine-cons...

Very cool and very powerful technology, it'll be interesting to see how fiber sensing progresses, especially with how much undersea fiber already exists. For subsea power cables, is there a parallel fiber dedicated just for DAS monitoring? Do these get bundled in with data fiber runs as well? I've been curious how well DAS can work over actively lit / in-service fiber.

gehsty•1mo ago
On the cables I worked on they would use a separate fibre, but power cables tended to massively overspec the number of fibres, so it was never an issue. Some even have two bundles of fibres.

A supplier played whale song they recorded from cables, and said they repackage and sell the same product to defense contractors.

wzdd•1mo ago
Last Chance to See has a fun bit about listening for dolphins in the Yangtze by taking a regular microphone and putting a condom over it. Always wondered how they sealed the end.
micw•1mo ago
Must just be long enough ^^
bcraven•1mo ago
https://www.hbmpodcast.com/podcast/hbm088-riptides-and-a-sin...

Information here from a superb podcast

gnatman•1mo ago
what if you tied it tightly beneath the mic with a rubber band, and then attached a weight 2-3 feet down the cord? that way the condom/mic floats end down and keeps things dry.
unwind•1mo ago
TIL about "plug-in power", which seems to be a thing that some sound recording devices with 3.5 mm "phono" jacks can provide.

Here [1] is a page at Klover, and here [2] is one at Shure. Not sure if there's a formal specification for this, or if it's just something that manufacturers started doing.

[1]: https://www.kloverproducts.com/blog/what-is-plugin-power

[2]: https://service.shure.com/s/article/difference-between-bias-...

emsign•1mo ago
Røde has the VXLR series of 3.5 mm jack to XLR adapters; one of them supports plug-in power.
micw•1mo ago
Cool, I wish I had seen this before we went whale watching in the Azores last year.

Can we now have lots of audio recordings, with documentation of whale behavior, to train an AI and get a whale translator at the end?

fleahunter•1mo ago
The most interesting bit here to me isn’t the $5 or the DIY, it’s that this is quietly the opposite of how we usually “do” sensing in 2025.

Most bioacoustics work now is: deploy a recorder, stream terabytes to the cloud, let a model find “whale = 0.93” segments, and then maybe a human listens to 3 curated clips in a slide deck. The goal is classification, not experience. The machines get the hours-long immersion that Roger Payne needed to even notice there was such a thing as a song, and humans get a CSV of detections.

A $5 hydrophone you built yourself flips that stack. You’re not going to run a transformer on it in real time, you’re going to plug it into a laptop or phone and just…listen. Long, boring, context-rich listening, exactly the thing the original discovery came from and that our current tooling optimizes away as “inefficient”.

If this stuff ever scales, I could imagine two very different futures: one is “citizen-science sensor network feeding central ML pipelines”, the other is “cheap instruments that make it normal to treat soundscapes as part of your lived environment”. The first is useful for papers. The second actually changes what people think the ocean is.

The $5 is important because it makes the second option plausible. You don’t form a relationship with a black-box $2,000 research hydrophone you’re scared to break. You do with something you built, dunked in a koi pond, and used to hear “fish kisses”. That’s the kind of interface that quietly rewires people’s intuitions about non-human worlds in a way no spectrogram ever will.
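That said, the laptop end of "plug it in and listen" is only a few lines away from a visual aid. A hedged sketch of a bare-bones spectrogram, using a synthetic frequency sweep as a stand-in for a real clip (NumPy only; sample rate, frame size, and the signal itself are all illustrative):

```python
import numpy as np

fs = 16000                      # sample rate in Hz (hypothetical)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic stand-in for a hydrophone clip: a tone sweeping 400 -> 800 Hz
x = np.sin(2 * np.pi * (400 + 100 * t) * t)

# Minimal short-time Fourier transform: Hann-windowed 1024-sample frames
frame, hop = 1024, 512
window = np.hanning(frame)
frames = [x[i:i + frame] * window for i in range(0, len(x) - frame, hop)]
sxx = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # time x frequency power
freqs = np.fft.rfftfreq(frame, 1.0 / fs)

# The loudest bin in each frame traces the sweep upward over time
peak_freqs = freqs[np.argmax(sxx, axis=1)]
print(peak_freqs[0], peak_freqs[-1])
```

The point stands either way: the tool can sit beside the long-form listening rather than replace it.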

jasonjayr•1mo ago
Cheap sensors, used by many, are how we get more reproducibility, more citizen science, and more understanding of the world around us.

RTL-SDR is another area where there is so much to see 'hidden' in electromagnetic radio frequency space.

zimpenfish•1mo ago
> You’re not going to run a transformer on it in real time

Why not? You can run BirdNET's model live in your browser[0]. Listen live and let the machine do the hard work of finding interesting bits[1] for later.

[0] https://birdnet-team.github.io/real-time-pwa/about/

[1] Including bits that you may have missed, obvs.

chankstein38•1mo ago
This is what I was going to say. My whole goal when setting up sensing projects is to eventually get it to a point that I can automate it. And I'm just a DIY dude in his house. I've been working on the detection of cars through vibrations detected by dual MPUs resonating through my house. I don't mean to imply I've had great success. I can see the pattern of an approaching car but I'm struggling to get it recognized as a car reliably and to not overcount.

But yeah, totally been doing projects like this for a long time lol not sure why OP implies you wouldn't do that. First thing I thought was "Oh man I want to put it in the lake near me and see if I can't get it detecting fish or something!"
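For what it's worth, a crude first pass at that kind of "something drove by" detection can be as simple as flagging windows whose RMS energy jumps well above the baseline. A sketch under stated assumptions (entirely hypothetical parameters, not the commenter's actual setup):

```python
import numpy as np

def detect_events(samples, fs, window_s=0.5, k=4.0):
    """Return start times (seconds) of windows whose RMS exceeds k times
    the median window RMS -- a crude burst detector for a sensor stream."""
    n = max(1, int(window_s * fs))
    usable = len(samples) - len(samples) % n      # drop the ragged tail
    windows = np.asarray(samples[:usable]).reshape(-1, n)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    threshold = k * np.median(rms)
    return [float(i) * window_s for i in np.nonzero(rms > threshold)[0]]

# Quiet noise with one loud burst around t = 5 s
fs = 100
rng = np.random.default_rng(0)
x = rng.normal(0, 0.01, 10 * fs)
x[5 * fs:6 * fs] += np.sin(np.linspace(0, 40 * np.pi, fs))
events = detect_events(x, fs)
print(events)  # windows at 5.0 and 5.5 s
```

A fixed multiple of the median is what makes it crude: distinguishing one car from two, or a car from a truck, is exactly the harder classification problem described above.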

zimpenfish•1mo ago
> First thing I thought was "Oh man I want to put it in the lake near me and see if I can't get it detecting fish or something!"

Same. Although my first effort with my hydrophone (in my parents' pond) was stymied because they live on a main road and all I picked up was car vibrations.

Maybe that's your solution - get a fish tank/pond and hydrophone!

asdhtjkujh•1mo ago
Slight tangent, but does anyone have experience with hydrophones that record in excess of 192 kHz? Last I checked, most of these are specialty devices with high price tags.

Recording full-fidelity whale or dolphin sounds (amongst others) requires using a higher sample rate than is available in most consumer-grade equipment. There's a lot more information down there!
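The sample-rate requirement falls straight out of the Nyquist limit: a recorder can only represent frequencies up to half its sample rate, and toothed-whale echolocation clicks can extend above roughly 100 kHz. A trivial illustration:

```python
# Nyquist limit: the highest representable frequency is half the sample
# rate, so a 192 kHz recorder tops out at 96 kHz of acoustic bandwidth.
def max_representable_khz(sample_rate_khz: float) -> float:
    return sample_rate_khz / 2

print(max_representable_khz(192))  # 96.0 -> misses content above 96 kHz
print(max_representable_khz(384))  # 192.0 -> covers far more of the band
```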

hecanjog•1mo ago
I've used AudioMoths for this. They can be configured to record at sampling rates up to 384 kHz and be deployed with waterproof cases: https://www.openacousticdevices.info/audiomoth
DoctorOetker•1mo ago
Why not use the ADC's used for CIS (contact image sensors, the linear sensors in consumer-grade scanners / photocopiers)?

For example:

12 MHz (i.e. 12 000 kHz) sample rate per channel, 4 channels, 16 bit ADC:

https://www.akm.com/eu/en/products/mfp-lbp/ak8471vn/

single digit dollar unit prices:

https://octopart.com/search?q=AK8471VN&currency=USD&specs=0
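One practical caveat with a part in that class is the raw data rate; a back-of-envelope check using the figures quoted above (12 MHz per channel, 4 channels, 16 bits):

```python
# Back-of-envelope raw data rate for a CIS-class ADC at full tilt
sample_rate_hz = 12_000_000   # per channel
channels = 4
bits_per_sample = 16

bits_per_s = sample_rate_hz * channels * bits_per_sample
print(bits_per_s / 8 / 1e6)   # prints 96.0 (MB/s of raw samples)
```

Sustaining ~96 MB/s rules out most microcontrollers and cheap single-board setups, which may be one reason these ADCs aren't the default choice despite the attractive unit price.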

Agingcoder•1mo ago
This title is very hard to parse
psc•1mo ago
I've been working on a citizen science version of this; we have 7 hydrophones deployed that anyone can listen to live:

https://live.orcasound.net/

These hydrophones are a bit more expensive (~$1k per deployment) but still very accessible compared to how much it usually costs. And the goal is to bring the cost down to the ~$100 range (so $5 is very impressive!):

https://experiment.com/projects/can-low-cost-diy-hydrophones...

All the data is being saved (used for scientific research & ML training), with some of the hydrophones going back to 2017, and yes it's quite difficult to listen to and review so much audio. Better tools like the hydrophone explorer UI are much needed (been working on something similar).

One of the things that's surprised me the most is how difficult it is to keep hydrophones up and running. I can sympathize with both the technical and social challenges: underwater is not a friendly environment for electronics, and it can be difficult to get permission to deploy hydrophones. But it's incredibly rewarding when it works and you capture some cool sounds.

For anyone interested, all the code is open source and acoustic data is freely available:

Code: https://github.com/orcasound/

Data: https://registry.opendata.aws/orcasound/

Community: https://orcasound.zulipchat.com/

alexpotato•1mo ago
> Better tools like the hydrophone explorer UI are much needed (been working on something similar).

Where could I learn more about the requirements for this? I love building tools like this.

psc•1mo ago
It's actually partly built and still in beta testing (it has some bugs), but here's a nice example: https://live.orcasound.net/bouts/bout_031YiYO7OqPbPhcmP3mQeb

Requirements are pretty flexible, but the inspiration is largely iNaturalist, and also this very cool project put together by Google Creative Lab back in 2019: https://patternradio.withgoogle.com/

Best place to learn more is to stop by the community Zulip chat (https://orcasound.zulipchat.com/) and ask questions, it's full of really knowledgeable people. Also you can explore the entire codebase here: https://github.com/orcasound/orcasite