frontpage.

Made with ♥ by @iamnishanth


Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
50•thelok•3h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
117•AlexeyBrin•6h ago•20 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
811•klaussilveira•21h ago•246 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
49•vinhnx•4h ago•7 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
91•1vuio0pswjnm7•7h ago•102 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
73•onurkanbkrc•6h ago•5 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1054•xnx•1d ago•601 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
471•theblazehen•2d ago•174 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
49•alephnerd•1h ago•15 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
197•jesperordrup•11h ago•68 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
9•surprisetalk•1h ago•2 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
537•nar001•5h ago•248 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
206•alainrk•6h ago•313 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
33•rbanffy•4d ago•6 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
26•marklit•5d ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
110•videotopia•4d ago•30 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
69•speckx•4d ago•71 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
63•mellosouls•4h ago•70 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
271•isitcontent•21h ago•36 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•110 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
284•dmpetrov•21h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
553•todsacerdoti•1d ago•267 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
467•lstoll•1d ago•308 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•214 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
41•matt_d•4d ago•16 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
367•vecti•23h ago•167 comments

Gemini 3.0 Deciphered the Mystery of a Nuremberg Chronicle Leaf's 500-Year-Old Roundels

https://blog.gdeltproject.org/gemini-as-indiana-jones-how-gemini-3-0-deciphered-the-mystery-of-a-nuremberg-chronicle-leafs-500-year-old-roundels/
56•kilroy123•1mo ago

Comments

jgeralnik•1mo ago
I think there’s something very interesting here and would be interested in hearing more about the date discrepancies. It’s a shame the article is mostly just the raw output of Gemini instead of more commentary.
game_the0ry•1mo ago
This is how you use AI.
KaiserPro•1mo ago
I mean, it's not.

There's no verification, and it's based on the assertion that this marginalia is a mystery, none of which appears to be backed up.

The article then doesn't actually do any analysis of the output, or any verification; it just pastes the dumps at the end, with no attempt to make them readable.

layer8•1mo ago
For populating blogspam?
game_the0ry•1mo ago
This is not how you use AI.
BigTTYGothGF•1mo ago
"several experts who reviewed the page were unable to discern their meaning and thus their purpose had remained elusive"

I find this a little hard to believe.

brokensegue•1mo ago
why?
why-o-why•1mo ago
Because the experts weren't cited, and without provenance or review this is just "fancy slop".
brokensegue•1mo ago
this isn't a peer reviewed journal article though. do you have expertise in the field to say whether an expert would be able to decipher this?
SirSavary•1mo ago
The book is written in Latin, not exactly a dead language.
brokensegue•1mo ago
That isn't what the claim is about. I mean, I don't think the source is particularly convincing, but the claim is that it figured out the significance of the text, not the literal meaning of the words.
roywiggins•1mo ago
The interesting claim is that this would be hard for an expert to do, which is basically unsupported outside of anonymous experts who spent an unknown amount of time on the question. It also doesn't quote any experts on whether Gemini's conclusions are reasonable.
smallnix•1mo ago
I know experts who deciphered this. But I will not tell you their names.
why-o-why•1mo ago
I don't think you understand the word "knowledge". Just because an LLM spews out an answer doesn't mean it is correct. It needs to be verified by experts in the field, that's how it becomes factual knowledge. Lord help us that I need to explain this.
brokensegue•1mo ago
I don't think this particular discussion has to do with the idea of knowledge. We're discussing whether human experts had previously deciphered the sections
why-o-why•1mo ago
We are discussing whether the LLM deciphering is accurate. That hasn't been demonstrated as measured by review, provenance, cross-referencing among historical record, etc.
dhedberg•1mo ago
I make no judgement on this particular claim; I have not checked it out.

But what immediately comes to mind from reading the title are all the "AI solutions" for the as-yet-undecoded Voynich manuscript that are posted with surprising (and increasing) frequency to at least one forum. They're all incompatible and fall apart on closer inspection.

A collection of them can be found at https://www.voynich.ninja/forum-59.html .

ajross•1mo ago
One probably important distinction is that the Voynich manuscript was deliberately obfuscated. Puzzling it out requires context that may not even exist anymore (consider discovering an intact TLS log a thousand years in the future: without the private cert, you'd never know it was just someone posting to HN!).

The notes in the linked article are presumptively-legible notes made in good faith, just not with enough detail for someone-who-is-not-the-author to understand. AI training sets are much broader than mere human intuition now.

fresh_broccoli•1mo ago
I find it worrying that this was upvoted so much so quickly, and HN users are apparently unable to spot the glaring red flags about this article.

1. Let's start with where the post was published. Check what kind of content this blog publishes - huge volumes of random low-effort AI-boosting posts with AI-generated images. This isn't a blog about history or linguistics.

2. The author is anonymous.

3. The contents of the post itself: it's just raw AI output. There's no expert commentary. It just mentions that unnamed experts were unable to do the job.

This isn't to say that LLMs aren't useful for science; on the contrary. See for example Terence Tao's blog. Notice how different his work is from whatever this post is.

OGEnthusiast•1mo ago
Given how quickly it got upvoted, I also wonder how much of the upvoting itself may be from AI bots.
roywiggins•1mo ago
I'm especially suspicious of the handwriting analysis. It seems like the kind of thing a vLLM would be pretty bad at doing and very good at convincingly faking for non-experts.

Gemini 3 Pro, e.g., fails very badly at reading the Braille in this image, confusing the English language text for the actual Braille. When you give it just the Braille, it still fails and confidently hallucinates a transcription badly enough that you don't even have to know Braille (I don't!) to see it's wrong.

https://m.facebook.com/groups/dullmensclub/posts/18885933484...

As far as I can tell, Gemini 3 Pro is still completely out of its depth and incapable of understanding Braille at all, and doesn't realize this.

steve-atx-7600•1mo ago
Happens too often these days. Also, express an unpopular opinion and get downvoted.
macinjosh•1mo ago
You are anonymous too, so…
fresh_broccoli•1mo ago
I am not announcing a scientific breakthrough.
frizlab•1mo ago
My first reflex when I see anything “solved” by AI is to go straight to the comments. This time again, I was not disappointed.
jonplackett•1mo ago
I feel sorry for people having to read the internet without the HN comments
ahofmann•1mo ago
That was said about Reddit some years ago, and now Reddit is clearly riddled with astroturfing and other manipulations. We don't know how big the problem on HN already is, or how bad it will get. But it would be naive to think that it doesn't happen here.
jonplackett•1mo ago
True. Sometimes weird links with very few upvotes magically end up in the top 10. But the comments usually bring them back to earth!

The most real benefit of HN vs Reddit is commenters who are actually knowledgeable in that field, who leave a comment or vote up an actually useful comment.

zozbot234•1mo ago
This is literally a "my two cents' worth" answer from Gemini Pro. It's a straightforward inference from the fact that "Anno Mundi" means "in the year of the world", i.e. the year since creation, and that the main text references Abraham's birth with conflicting dates. It's nifty that we now have automated means of extracting a sensible scholarly consensus of "what could this possibly mean", but there's absolutely no mystery here.
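To illustrate the kind of conversion being described here, a minimal sketch of mapping an Anno Mundi (AM) year onto the BC/AD scale. The epoch offsets below are illustrative assumptions: medieval chroniclers used several competing creation dates, and the roundels in question could follow any of them.

```python
# Illustrative sketch: converting Anno Mundi (AM) years to astronomical
# years (0 = 1 BC, negative = earlier BC, positive = AD).
# The epoch offsets are common textbook values, chosen for illustration;
# they are not taken from the article.
EPOCHS = {
    "hebrew": 3761,     # rabbinic reckoning: AM 1 begins in 3761 BC
    "byzantine": 5509,  # Byzantine reckoning: AM 1 begins in 5509 BC
}

def am_to_astronomical(am_year: int, tradition: str) -> int:
    """Return the astronomical year corresponding to an AM year."""
    return am_year - EPOCHS[tradition]
```

For example, under the Hebrew reckoning AM 5200 lands in AD 1439, squarely in the era of the Nuremberg Chronicle; the point is simply that once the epoch is known, the "mystery" reduces to subtraction.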
KaiserPro•1mo ago
tl;dr: Gemini asserts they are converted dates.

My colleagues do this as well with AI and it fucks me right off.

They just present the raw output in its long form and expect everyone to follow the flow. Context is everything, damn it.

Looking into it further, there isn't really a mystery as to what they are, or at least I couldn't find anything suggesting that it's unknown, especially given the context of the page.

It's great that Gemini can do this; it's a shame that lots of the ancillary "analysis" about the writing doesn't appear to be correct (humanist minuscule, I would suggest, is too new, too heathen and too Italian for a German manuscript of the time https://medievalwritings.atillo.com.au/whyread/paleographysu...)

jasonvorhe•1mo ago
Some data magicians unlocked the secrets of the Oculists a while back, which got me to finally dig into some occult literature and various secret societies. Hope this does the same for others.