frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


AI World Clocks

https://clocks.brianmoore.com/
517•waxpancake•4h ago•212 comments

30 Days, 9 Cities, 1 Question: Where Did American Prosperity Go?

https://kyla.substack.com/p/30-days-9-cities-1-question-where
29•rcardo11•1h ago•38 comments

Has Google solved two of AI's oldest problems?

https://generativehistory.substack.com/p/has-google-quietly-solved-two-of
74•scrlk•3d ago•30 comments

A race condition in Aurora RDS

https://hightouch.com/blog/uncovering-a-race-condition-in-aurora-rds
175•theanomaly•5h ago•57 comments

HipKittens: Fast and furious AMD kernels

https://hazyresearch.stanford.edu/blog/2025-11-09-hk
42•dataminer•20h ago•8 comments

Structured Outputs on the Claude Developer Platform (API)

https://www.claude.com/blog/structured-outputs-on-the-claude-developer-platform
65•adocomplete•4h ago•38 comments

All Praise to the Lunch Ladies

https://bittersoutherner.com/issue-no-12/all-praise-to-the-lunch-ladies
92•gmays•3h ago•39 comments

Manganese is Lyme disease's double-edge sword

https://news.northwestern.edu/stories/2025/11/manganese-is-lyme-diseases-double-edge-sword
104•gmays•6h ago•57 comments

Show HN: Tiny Diffusion – A character-level text diffusion model from scratch

https://github.com/nathan-barry/tiny-diffusion
69•nathan-barry•4d ago•10 comments

Awk Technical Notes (2023)

https://maximullaris.com/awk_tech_notes.html
85•signa11•1w ago•31 comments

Mentra (YC W25) Is Hiring: Head of Growth to Make Smart Glasses Mainstream

https://www.ycombinator.com/companies/mentra/jobs/2YbQCRw-make-smart-glasses-mainstream-head-of-g...
1•caydenpiercehax•2h ago

The disguised return of EU Chat Control

https://reclaimthenet.org/the-disguised-return-of-the-eus-private-message-scanning-plot
432•egorfine•5h ago•198 comments

Xqerl – Erlang XQuery 3.1 Processor

https://zadean.github.io/xqerl/
24•smartmic•3d ago•4 comments

US Tech Market Treemap

https://caplocus.com/
92•gwintrob•6h ago•39 comments

Go's Sweet 16

https://go.dev/blog/16years
29•0xedb•51m ago•1 comment

Minisforum Stuffs Entire Arm Homelab in the MS-R1

https://www.jeffgeerling.com/blog/2025/minisforum-stuffs-entire-arm-homelab-ms-r1
57•kencausey•4h ago•32 comments

SSL Configuration Generator

https://ssl-config.mozilla.org/
5•smartmic•1h ago•0 comments

Houston, We Have a Problem: Anthropic Rides an Artificial Wave – BIML

https://berryvilleiml.com/2025/11/14/houston-we-have-a-problem-anthropic-rides-an-artificial-wave/
36•cratermoon•3h ago•20 comments

Genergo: Propellantless space-propulsion system

https://www.satcom.digital/news/genergo-an-italian-company-builds-the-worlds-first-known-propella...
52•maremmano•3h ago•43 comments

Incus-OS: Immutable Linux OS to run Incus as a hypervisor

https://linuxcontainers.org/incus-os/
135•_kb•1w ago•44 comments

Bitchat for Gaza – messaging without internet

https://updates.techforpalestine.org/bitchat-for-gaza-messaging-without-internet/
291•ciconia•5h ago•147 comments

Show HN: Epstein Files Organized and Searchable

https://searchepsteinfiles.com/
154•searchepstein•3h ago•15 comments

Honda: 2 years of ML vs. 1 month of prompting - here's what we learned

https://www.levs.fyi/blog/2-years-of-ml-vs-1-month-of-prompting/
272•Ostatnigrosh•4d ago•97 comments

Magit manuals are available online again

https://github.com/magit/magit/issues/5472
108•vetronauta•11h ago•40 comments

Winamp clone in Swift for macOS

https://github.com/mgreenwood1001/winamp
149•hyperbole•10h ago•108 comments

Show HN: Cj–tiny no-deps JIT in C for x86-64 and ARM64

https://github.com/hellerve-pl-experiments/cj
7•hellerve•1w ago•0 comments

Germany to ban Huawei from future 6G network

https://www.bloomberg.com/news/articles/2025-11-13/germany-to-ban-huawei-from-future-6g-network-i...
167•teleforce•6h ago•124 comments

AGI fantasy is a blocker to actual engineering

https://www.tomwphillips.co.uk/2025/11/agi-fantasy-is-a-blocker-to-actual-engineering/
513•tomwphillips•10h ago•516 comments

Meeting notes between Forgejo and the Dutch government via Git commits

https://codeberg.org/forgejo/sustainability/pulls/137/files
87•speckx•5h ago•35 comments

USDA head says 'everyone' on SNAP will now have to reapply

https://thehill.com/homenews/administration/5606715-agriculture-secretary-snap-reapply/
19•sipofwater•39m ago•17 comments

Has Google solved two of AI's oldest problems?

https://generativehistory.substack.com/p/has-google-quietly-solved-two-of
73•scrlk•3d ago

Comments

throwup238•1h ago
I really hope they have because I’ve also been experimenting with LLMs to automate searching through old archival handwritten documents. I’m interested in the Conquistadors and their extensive accounts of their expeditions, but holy cow reading 16th century handwritten Spanish and translating it at the same time is a nightmare, requiring a ton of expertise and inside field knowledge. It doesn’t help that they were often written in the field by semi-literate people who misused lots of words. Even the simplest accounts require quite a lot of detective work to decipher with subtle signals like that pound sign for the sugar loaf.

> Whatever it is, users have reported some truly wild things: it codes fully functioning Windows and Apple OS clones, 3D design software, Nintendo emulators, and productivity suites from single prompts.

This I’m a lot more skeptical of. The linked Twitter post just looks like something it would replicate via HTML/CSS/JS. What’s the kernel look like?

WhyOhWhyQ•1h ago
> Whatever it is, users have reported some truly wild things: it codes fully functioning Windows and Apple OS clones, 3D design software, Nintendo emulators, and productivity suites from single prompts.

Wow, I'm doing it way wrong. How do I get the good stuff?

zer00eyz•55m ago
You're not.

I want you to go into the kitchen and bake a cake. Please replace all the flour with baking soda. If it comes out looking limp and lifeless just decorate it up with extra layers of frosting.

You can make something that looks like a cake but would not be good to eat.

The cake, sometimes, is a lie. And in this case, so are likely most of these results... or they are the actual source code of some other project just regurgitated.

hinkley•33m ago
We got the results back. You are a horrible person. I’m serious, that’s what it says: “Horrible person.”

We weren’t even testing for that.

erulabs•24m ago
Well, what does a neck-bearded old engineer know about fashion? He probably - Oh, wait. It's a she. Still, what does she know? Oh wait, it says she has a medical degree. In fashion! From France!

joshstrange•22m ago
If you want to listen to the line from Portal 2 it's on this page (second line in the section linked): https://theportalwiki.com/wiki/GLaDOS_voice_lines_(Portal_2)...

joshstrange•23m ago
Source: Portal 2, you can see the line and listen to it here (last one in section): https://theportalwiki.com/wiki/GLaDOS_voice_lines_(Portal_2)...

hinkley•4m ago
I figured it was appropriate given the context.

I’m still amazed that game started as someone’s school project. Long live the Orange Box!

snickerbockers•59m ago
I'm skeptical that they're actually capable of making something novel. There are thousands of hobby operating systems and video game emulators on GitHub for it to train on, so it's not particularly surprising that it can copy somebody else's homework.

flatline•30m ago
I believe they can create a novel instance of a system from a sufficient number of relevant references - i.e. implement a set of already-known features without (much) code duplication. LLMs are certainly capable of this level of generalization due to their huge non-relevant reference set.

Whether they can expand beyond that into something truly novel from a feature/functionality standpoint is a whole other, and less well-defined, question. I tend to agree that they are closed systems relative to their corpus. But then, aren't we?

I feel like the aperture for true novelty to enter is vanishingly small, and cultures put a premium on it vis-a-vis the arts, technological innovation, etc. Almost every human endeavor is just copying and iterating on prior examples.

nestorD•56m ago
Oh! That's a nice use-case and not too far from stuff I have been playing with! (happily I do not have to deal with handwriting, just bad scans of older newspapers and texts)

I can vouch for the fact that LLMs are great at searching in the original language, summarizing key points to let you know whether a document might be of interest, then providing you with a translation where you need one.

The fun part has been building tools to turn Claude Code and Codex CLI into capable research assistants for that type of project.

throwup238•2m ago
> The fun part has been building tools to turn Claude Code and Codex CLI into capable research assistants for that type of project.

What does that look like? How well does it work?

I ended up writing a research TUI with my own higher level orchestration (basically have the thing keep working in a loop until a budget has been reached) and document extraction.
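
For what it's worth, that kind of budget-capped orchestration loop is simple to sketch. Everything below is hypothetical: `run_agent_step` is a stand-in for one real agent invocation (e.g. shelling out to Claude Code or Codex CLI), and the costs are invented.

```python
# Sketch of a budget-capped research loop. `run_agent_step` is a
# hypothetical stand-in for one LLM-agent invocation; a real version
# would call an API or CLI and report the actual cost of the step.

from dataclasses import dataclass

@dataclass
class StepResult:
    cost_usd: float   # what this step cost
    done: bool        # did the agent declare the task finished?
    notes: str        # findings to carry into the next step

def run_agent_step(task: str, history: list[str]) -> StepResult:
    # Placeholder logic: pretend each step costs 5 cents and the
    # agent finishes after accumulating three rounds of notes.
    return StepResult(cost_usd=0.05, done=len(history) >= 3,
                      notes=f"worked on: {task}")

def research_loop(task: str, budget_usd: float) -> list[str]:
    spent, history = 0.0, []
    while spent < budget_usd:
        result = run_agent_step(task, history)
        spent += result.cost_usd
        history.append(result.notes)
        if result.done:
            break
    return history

print(len(research_loop("transcribe ledger page", budget_usd=1.0)))  # 4
```

The same shape works with wall-clock time or token counts as the budget instead of dollars.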

jvreeland•51m ago
I'd love to find more info on this, but from what I can find it seems to be making webpages that look like those products, and seemingly can "run Python" or "emulate a game" - but writing something that, based on all of GitHub, can approximate an iPhone or an emulator in JavaScript/CSS/HTML is very, very different from writing an OS.

netsharc•55m ago
Author says "It is the most amazing thing I have seen an LLM do, and it was unprompted, entirely accidental." and then jumps back to the "beginning of the story", including talking about a trip to Canada.

Skip to the section headed "The Ultimate Test" for the resolution of the clickbait of "the most amazing thing...". (According to him, it correctly interpreted a line in an 18th century merchant ledger using maths and logic)

appreciatorBus•12m ago
The new model may or may not be great at handwriting but I found the author's constant repetition about how amazing it was irritating enough to stop reading and to wonder if the article itself was slop-written.

"users have reported some truly wild things" "the results were shocking" "the most amazing thing I have seen an LLM do" "exciting and frightening all at once" "the most astounding result I have ever seen" "made the hair stand up on the back of my neck"

bgwalter•49m ago
No, just another academic with the ominous handle @generativehistory who is beguiled by "AI". It is strange that others can never reproduce such amazing feats.

pksebben•27m ago
I don't know if I'd call it an 'amazing feat', but claude had me pause for a moment recently.

Some time ago, I'd been working on a framework that involved a series of servers (not the only one I've talked to claude about) that had to pass messages around in a particular fashion. Mostly technical implementation details and occasional questions about architecture.

Fast forward a ways, and on a lark I decided to ask in the abstract about the best way to structure such an interaction. Mark that this was not in the same chat or project and didn't have any identifying information about the original, save for the structure of the abstraction (in this case, a message bus server and some translation and processing services, all accessed via client.)

so:

- we were far enough removed that the whole conversation pertaining to the original was for sure not in the context window

- we only referred to the abstraction (with like a A=>B=>C=>B=>A kind of notation and a very brief question)

- most of the work on the original was in claude code

and it knew. In the answer it gave, it mentioned the project by name. I can think of only two ways this could have happened:

- they are doing some real fancy tricks to cram your entire corpus of chat history into the current context somehow

- the model has access to some kind of fact database where it was keeping an effective enough abstraction to make the connection

I find either one mindblowing for different reasons.
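
If it's the second hypothesis, the mechanism doesn't have to be exotic: store a distilled one-line summary per past project and retrieve by similarity. A toy sketch, with word overlap standing in for real embedding similarity and all stored entries invented:

```python
# Invented cross-conversation "memory": distilled summaries of past
# projects, retrieved by crude word overlap rather than embeddings.

MEMORY = [
    ("bus-project", "message bus server with translation and processing services"),
    ("cli-tool", "terminal todo application with a plugin system"),
]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = set(query.lower().split())
    scored = sorted(MEMORY,
                    key=lambda item: len(q & set(item[1].split())),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# An abstract question with no project name still lands on the
# stored summary:
print(retrieve("best way to structure a message bus with processing services"))
# ['bus-project']
```

That would explain how a purely structural description (A=>B=>C=>B=>A) could surface the project name: the stored abstraction overlaps with the query even though the name was never mentioned.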

zahlman•7m ago
Are you sure it isn't just a case of a write-up of the project appearing in the training data?

pavlov•37m ago
I’ve seen those A/B choices on Google AI Studio recently, and there wasn’t a substantial difference between the outputs. It felt more like a different random seed for the same model.

Of course it’s very possible my use case wasn’t terribly interesting so it wouldn’t reveal model differences, or that it was a different A/B test.

jeffbee•22m ago
For me they've been very similar, except in one case where I corrected it: on one side it doubled down on being objectively wrong, and on the other side it took my feedback and started over with a new line of thinking.

thatoneengineer•36m ago
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

efitz•33m ago
I haven’t seen this new Google model but now must try it out.

I will say that other frontier models are starting to surprise me with their reasoning and understanding; I really have a hard time making (or believing) the argument that they are just predicting the next word.

I’ve been using Claude Code heavily since April; Sonnet 4.5 frequently surprises me.

Two days ago I told the AI to read all the documentation from my 5 projects related to a tool I’m building, and create a wiki, focused on audience and task.

I'm hand reviewing the 50 wiki pages it created, but overall it did a great job.

I got frustrated about one issue: I have a GitHub issue to create a way to integrate with issue trackers (like Jira), but it's still TODO, and the AI claimed on the wiki home page that we had issue tracker integration. It created a page for it and everything; I figured it was hallucinating.

I went to edit the page and replace it with placeholder text and was shocked that the LLM had (unprompted) figured out how to use existing features to integrate with issue trackers, and wrote sample code for GitHub, Jira and Slack (notifications). That truly surprised me.

energy123•5m ago
Predicting the next word requires understanding; they're not separate things. If you don't know what comes after the next word, then you don't know what the next word should be. So the task implicitly forces a longer-horizon understanding of the future sequence.

conception•31m ago
I will note that the 2.5 Pro preview… March? was maybe the best model I've used yet. The actual release model was… less. I suspect Google found the preview too expensive and optimized it down, but it was interesting to see there was some hidden horsepower there. Google has always been poised to be the AI leader/winner - excited to see if this is fluff, the real deal, or another preview that gets nerfed.
Legend2440•17m ago
What an unnecessarily wordy article. It could have been a fifth of the length. The actual point is buried under pages and pages of fluff and hyperbole.

johnwheeler•15m ago
Yes, I agree, and it seems like the author has a naïve experience with LLMs, because what he's talking about is kind of the bread and butter as far as I'm concerned.

asimilator•14m ago
Summarize it with an LLM.

observationist•9m ago
This might just be a handcrafted prompt framework for handwriting recognition tied in with reasoning - do a rough pass, make assumptions and predictions, check assumptions and predictions, if they pass, use the degree of confidence in their passage to inform what the other characters might be, and gradually flesh out an interpretation of what was intended to be communicated.

If they could get this to occur naturally - with no supporting prompts, and only zero-shot or one-shot reasoning - then it could extend to complex composition generally, which would be cool.
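
The multi-pass idea (rough pass, keep the high-confidence characters, let constraints resolve the rest) can be illustrated with a toy example. Everything here is invented for illustration - the lexicon and per-character confidences would come from the model in a real pipeline:

```python
import re

# Invented lexicon standing in for the model's knowledge of
# period-appropriate vocabulary.
LEXICON = ["ledger", "sugar", "pound", "loaf"]

def rough_pass(word: list[tuple[str, float]]) -> str:
    """First pass: take the top guess at every position."""
    return "".join(ch for ch, _ in word)

def refine(word: list[tuple[str, float]], threshold: float = 0.8) -> str:
    """Second pass: trust only high-confidence characters and let the
    lexicon fill in the rest - the 'check assumptions' step."""
    pattern = "".join(ch if conf >= threshold else "." for ch, conf in word)
    matches = [w for w in LEXICON if re.fullmatch(pattern, w)]
    # Only commit when the constraints pick out a unique word.
    return matches[0] if len(matches) == 1 else rough_pass(word)

# An 's' misread as 'f' with low confidence (the classic long-s trap):
guesses = [("f", 0.40), ("u", 0.95), ("g", 0.92), ("a", 0.97), ("r", 0.99)]
print(rough_pass(guesses))  # fugar
print(refine(guesses))      # sugar
```

When the constraints don't pick out a unique word, this sketch falls back to the rough pass rather than guessing - which is roughly the "degree of confidence" gating described above.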

lproven•3m ago
Betteridge's law surely applies.