frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
84•valyala•4h ago•16 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•14 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
35•zdw•3d ago•4 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
89•mellosouls•6h ago•167 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
131•valyala•4h ago•99 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
47•surprisetalk•3h ago•52 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
95•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•23h ago•256 comments

First Proof

https://arxiv.org/abs/2602.05192
66•samasblack•6h ago•51 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1091•xnx•1d ago•618 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
4•mbitsnbites•3d ago•0 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
63•thelok•5h ago•9 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
231•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
516•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
93•onurkanbkrc•8h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
332•ColinWright•3h ago•399 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
254•alainrk•8h ago•412 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
182•1vuio0pswjnm7•10h ago•251 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
611•nar001•8h ago•269 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
35•marklit•5d ago•6 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
27•momciloo•4h ago•5 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
47•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
96•speckx•4d ago•108 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•117 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
32•sandGorgon•2d ago•15 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
287•isitcontent•1d ago•38 comments

Animals could easily be talking to us if we tried

https://evanverma.com/animals-could-easily-be-talking-to-us-if-we-tried
13•edverma2•3mo ago

Comments

alex_young•3mo ago
There seems to be rather little evidence to back up this claim…
muppetman•3mo ago
Some random guy on the Internet's blog post about how he thinks talking animals are nearly a thing, with zero references/evidence or anything, doesn't really seem like HN content?
danillonunes•3mo ago
And of course there's AI!
jorl17•3mo ago
Our AI systems "work" because we can derive meaning from the words that we feed into them, right? We put words in, train, and words come out. How exactly would that work with animals?

"Woof in, woof out" still means not knowing what the woof's all about.

Don't get me wrong, I have often thought about this exact question: that surely we are close to finding a way to communicate with animals, or at the very least study them at greater length through the use of LLMs and similar systems. However, I have yet to find exactly how we could do this.

I'm sure we can create an LLM that mimics the expressions and behavior of animals (much like we have created LLMs that "mimic" us). But that will still give us very limited interpretability. It will definitely allow us to tinker with the inputs without needing a real animal, but that still gives us a very limited understanding of what exactly is going on.

I would definitely pour my heart and soul into such a project :)

simonpure•3mo ago
There's DolphinGemma; no microchips needed -

https://blog.google/technology/ai/dolphingemma/

fragrom•3mo ago
This is an extraordinarily hand wavy blog post.

Fusion is really simple, too, you just hook up the things and there's power!

Geee•3mo ago
Dogs can already talk using buttons and great apes can talk with sign language. This seems feasible, maybe even without a microchip, just with non-invasive reading of brain waves.
MangoToupe•3mo ago
Putting aside quibbling over what constitutes language, talking, etc, animals do clearly communicate to us and understand us (to some limit). They read our facial expressions, hear our tones, can distinguish words and names. Similarly: any pet owner who pays attention can learn to read the body language of their animal companions, their tone of voice, and sometimes even distinguish what the pet wants or how they feel from individual vocal articulations. We've managed to teach great apes to use signs to communicate to us.

All this is to say: is there value in pretending we can "translate" to English with complex grammar? Maybe not. But it might be interesting to learn and track, say, which sort of meow is "play with me", which is "feed me", which is "I'm stressed", which is "I want another toy", which is "I'm worried about you", etc.

There have been claims of teaching dogs to use buttons to communicate complex things; some of it is easy to believe (e.g. I have taught my own dogs to press a button when they want to go out, which is relatively straightforward conditioning), but some of it might be a performance for social media. I understand the skepticism, but it's surely worth researching to what extent the dogs actually are "communicating" versus seeking specific things, or even indicating concerns or emotions to us.

This gets even more interesting with animals with complex socialization of their own: whales, dolphins, birds, etc. Domesticated animals and our close relatives already have a genetic edge in communicating with us; but intraspecies communication of animals can be opaque or literally outside our ability to hear or differentiate. Surely algorithms and automated recording/correlation could reveal the complexity of these relations.

sollewitt•3mo ago
I chose to read this as a really good satire on the Dunning-Kruger effect.
IAmBroom•3mo ago
I chose to read this as the blog of someone who should put down the vape pen once in a while.
brg•3mo ago
My opinion is that we have little to no interest in what animals, plants, or even other people are thinking. The vast majority of it would be considered crude and offensive at best.
satisfice•3mo ago
My dog already talks. She barks. And her barks mean “hey!”

What exactly is AI going to do to improve on that?

satisfice•3mo ago
Here is what my dog would say when I’m going out…

“What?!” “no no no no no” “hey HEY” “come back!”

I could not bear to hear that. Barking is better.

If you could talk to the animals— just imagine it— they would say things that were only boring, distressing, or bizarre. And they would have no comprehension of what we say to them.

clickety_clack•3mo ago
Is humanity prepared to find out that our loving dogs walk around with the thought “I can’t wait to kill this guy and take over the pack”.
OutOfHere•3mo ago
There is no training dataset to map the sensor data to thoughts. We don't know the meaning of what they're thinking. It's futile without a way to develop training data.

A very limited training dataset can perhaps be created with binary choices that the animal can be trained to communicate via physical actions. With sufficient effort, an embedding model mapping thoughts to concepts can be developed. Still, how to convert the embedding vector to text is unclear.
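A minimal sketch of the binary-choice setup this comment describes: each trial pairs a vector of sensor readings with one of two trained responses, and a simple classifier learns the mapping. Everything here is illustrative, assuming synthetic data in place of any real neural or behavioral recordings; no such dataset exists in the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row is a sensor-feature vector recorded while
# the animal signals one of two trained choices (e.g. "yes" vs "no"
# paddle). Labels are generated from a hidden linear rule plus noise.
n, d = 200, 8
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression by gradient descent: learn to predict the
# binary choice from the sensor features.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / n
    w -= lr * grad

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"train accuracy: {accuracy:.2f}")
```

This only gets as far as a discrete choice decoder; the comment's open problem — turning a learned embedding into free-form text — is untouched by anything this simple.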