Shipping WebGPU on Windows in Firefox 141

https://mozillagfx.wordpress.com/2025/07/15/shipping-webgpu-on-windows-in-firefox-141/
73•Bogdanp•3h ago•16 comments

Cloudflare 1.1.1.1 Incident on July 14, 2025

https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
228•nomaxx117•5h ago•116 comments

Tilck: A Tiny Linux-Compatible Kernel

https://github.com/vvaltchev/tilck
122•chubot•5h ago•23 comments

GPUHammer: Rowhammer attacks on GPU memories are practical

https://gpuhammer.com/
174•jonbaer•9h ago•53 comments

Six Years of Gemini

https://geminiprotocol.net/news/2025_06_20.gmi
129•brson•6h ago•40 comments

LLM Daydreaming

https://gwern.net/ai-daydreaming
57•nanfinitum•7h ago•18 comments

Ukrainian hackers destroyed the IT infrastructure of Russian drone manufacturer

https://prm.ua/en/ukrainian-hackers-destroyed-the-it-infrastructure-of-a-russian-drone-manufacturer-what-is-known/
67•doener•1h ago•17 comments

Documenting what you're willing to support (and not)

https://rachelbythebay.com/w/2025/07/07/support/
22•zdw•3d ago•1 comment

Show HN: Shoggoth Mini – A soft tentacle robot powered by GPT-4o and RL

https://www.matthieulc.com/posts/shoggoth-mini
476•cataPhil•17h ago•91 comments

Reflections on OpenAI

https://calv.info/openai-reflections
531•calvinfo•16h ago•300 comments

NIST ion clock sets new record for most accurate clock

https://www.nist.gov/news-events/news/2025/07/nist-ion-clock-sets-new-record-most-accurate-clock-world
296•voxadam•17h ago•104 comments

Hijacking Trust? Bitvise Under Fire for Controlling Domain of FOSS Project PuTTY

https://blog.pupred.com/blog/puttyvsbitvise/
36•ColinWright•3h ago•27 comments

Where's Firefox going next?

https://connect.mozilla.org/t5/discussions/where-s-firefox-going-next-you-tell-us/m-p/100698#M39094
200•ReadCarlBarks•12h ago•274 comments

I'm Switching to Python and Actually Liking It

https://www.cesarsotovalero.net/blog/i-am-switching-to-python-and-actually-liking-it.html
42•cesarsotovalero•1h ago•54 comments

Running a million-board chess MMO in a single process

https://eieio.games/blog/a-million-realtime-chess-boards-in-a-single-process/
114•isaiahwp•3d ago•15 comments

To be a better programmer, write little proofs in your head

https://the-nerve-blog.ghost.io/to-be-a-better-programmer-write-little-proofs-in-your-head/
334•mprast•16h ago•133 comments

The FIPS 140-3 Go Cryptographic Module

https://go.dev/blog/fips140
146•FiloSottile•12h ago•49 comments

My Family and the Flood

https://www.texasmonthly.com/news-politics/texas-flood-firsthand-account/
162•herbertl•11h ago•57 comments

Algorithms for making interesting organic simulations

https://bleuje.com/physarum-explanation/
62•todsacerdoti•2d ago•6 comments

The beauty entrepreneur who made the Jheri curl a sensation

https://thehustle.co/originals/the-beauty-entrepreneur-who-made-the-jheri-curl-a-sensation
4•Anon84•2d ago•0 comments

Congress moves to reject bulk of White House's proposed NASA cuts

https://arstechnica.com/space/2025/07/congress-moves-to-reject-bulk-of-white-houses-proposed-nasa-cuts/
151•DocFeind•6h ago•89 comments

The Story of Mel, A Real Programmer, Annotated (1996)

https://users.cs.utah.edu/~elb/folklore/mel-annotated/node1.html#SECTION00010000000000000000
102•fanf2•3d ago•31 comments

Show HN: Reviving a 20 year old OS X App

https://andrewshaw.nl/blog/reviving-genius
49•shawa_a_a•3d ago•25 comments

Nextflow: System for creating scalable, portable, reproducible workflows

https://github.com/nextflow-io/nextflow
13•saikatsg•4h ago•1 comment

Mostly dead influential programming languages (2020)

https://www.hillelwayne.com/post/influential-dead-languages/
157•azhenley•3d ago•99 comments

Plasma Bigscreen rises from the dead with a better UI

https://www.neowin.net/news/kdes-android-tv-alternative-plasma-bigscreen-rises-from-the-dead-with-a-better-ui/
154•bundie•16h ago•58 comments

Designing for the Eye: Optical corrections in architecture and typography

https://www.nubero.ch/blog/015/
161•ArmageddonIt•15h ago•24 comments

Mira Murati’s AI startup Thinking Machines valued at $12B in early-stage funding

https://www.reuters.com/technology/mira-muratis-ai-startup-thinking-machines-raises-2-billion-a16z-led-round-2025-07-15/
110•spenvo•16h ago•126 comments

Lorem Gibson

http://loremgibson.com/
137•DyslexicAtheist•3d ago•30 comments

LLM Inevitabilism

https://tomrenner.com/posts/llm-inevitabilism/
1582•SwoopsFromAbove•1d ago•1490 comments

o3 and Grok 4 accidentally vindicate neurosymbolic AI

https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated
52•NotInOurNames•2d ago

Comments

YuriNiyazov•2d ago
motte, meet bailey. Gary Marcus' shtick the entire time has been "LLMs are the wrong approach", and now the claim is "actually, the entire time I've been claiming something much weaker: LLMs that call out to code interpreters are sufficient for neurosymbolic AI".
hooah•2d ago
He’s been saying that LLMs aren’t a “universal solvent” all along; it’s not a “recent claim”.

''' In my 2018 Deep Learning: A Critical Appraisal for example, I wrote

Despite all of the problems I have sketched, I don’t think that we need to abandon deep learning.

Rather, we need to reconceptualize it: not as a universal solvent, but simply as one tool among many, a power screwdriver in a world in which we also need hammers, wrenches, and pliers, not to mention chisels and drills, voltmeters, logic probes, and oscilloscopes. '''

YuriNiyazov•1d ago
Thanks for your reply; I can’t edit the original comment but I have updated my personal understanding of Marcus’ position.
kgwgk•2d ago
2001: Resisting the conventional wisdom that says that if the mind is a large neural network it cannot simultaneously be a manipulator of symbols, Marcus outlines a variety of ways in which neural systems could be organized so as to manipulate symbols, and he shows why such systems are more likely to provide an adequate substrate for language and cognition than neural systems that are inconsistent with the manipulation of symbols.

2018: While none of this work has yet fully scaled towards anything like full-service artificial general intelligence, I have long argued (Marcus, 2001) that more [work] on integrating microprocessor-like operations into neural networks could be extremely valuable.

2022: Where people like me have championed “hybrid models” that incorporate elements of both deep learning and symbol-manipulation, Hinton and his followers have pushed over and over to kick symbols to the curb.

YuriNiyazov•1d ago
Thanks for your reply; I can’t edit the original comment but I have updated my personal understanding of Marcus’ position.
qwertylicious•1d ago
It says a lot about the current discourse around AI that 6 years ago Marcus would write:

> Despite all of the problems I have sketched, I don’t think that we need to abandon deep learning.

And that would somehow be spun, today, as "LLMs are the wrong approach".

Meanwhile, another attempt to post this article here got straight up flagged, I can only assume because this whole topic has become about religious orthodoxy vs the heretics.

YuriNiyazov•1d ago
Thanks for your reply; I can’t edit the original comment but I have updated my personal understanding of Marcus’ position.
4b11b4•4h ago
have also updated
mindcrime•2d ago
I mostly agree with Gary on the core premise of this post, which I interpret generally as "it would be a good idea to pursue neuro-symbolic AI, not just deep learning."

A couple of additional thoughts:

1. She goes on to point out that the field has become an intellectual monoculture, with the neurosymbolic approach largely abandoned, and massive funding going to the pure connectionist (neural network) approach

Just to nitpick... that is largely true, but with the caveat that there has been something of a resurgence of interest in neuro-symbolic AI over just the last couple of years. There's been a series of "Neuro-Symbolic AI Summer School" events[1][2][3] going on since 2022 with the next one coming up in August. And there have been recent books[4][5] published specifically on neuro-symbolic AI. You'll also find recent papers on neuro-symbolic AI on arXiv[6]. So for those who are interested in this topic, there is definitely activity underway "out there".

2. Including LLMs somewhere in the next evolution of AI makes sense to me, but leaving them at the core may be a mistake.

I've spent a lot of time thinking about this, and generally agree with this sentiment. Some kind of fusion of LLMs (or "connectionism" in general) and symbolic processing seems desirable, but I'm not sure that we should rely on LLMs to be "core" and try to just layer symbolic processing on top of what we get from the LLM. I have my own thoughts on how such an integration might work, but it's all still speculative at the moment. Still, I find the whole notion worthy enough to invest time and attention into, for whatever that is worth.

[1]: https://ibm.github.io/neuro-symbolic-ai/events/ns-summerscho...

[2]: https://neurosymbolic.github.io/nsss2023/

[3]: https://neurosymbolic.github.io/nsss2024/

[4]: https://www.amazon.com/Neuro-Symbolic-AI-transparent-trustwo...

[5]: https://www.iospress.com/catalog/books/handbook-on-neurosymb...

[6]: https://arxiv.org/pdf/2501.05435

ACCount36•11h ago
It's a very funny read.

"See, LLMs that are allowed to use Python perform better than ones that aren't, and Python is symbolic, so I was right all along!"

Looks like a surrender to me.

xrd•11h ago
Surrender by whom?

Isn't the entire point of the article that leaders in the AI/ML space have consistently dismissed the need for exactly that? It seems like a valid question to be left with after reading it.

And it has huge financial implications for the industry.

ACCount36•11h ago
By Gary Marcus, of course.

If he claims that giving LLMs a Python interpreter is a huge win for his paradigm, then major AI companies have been "winning" since 2022.
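
For concreteness, the "Python interpreter" setup being referred to is just a tool loop: the model writes code, a harness executes it, and the output is fed back into the conversation. Below is a minimal, hypothetical sketch of that loop; call_llm is a stand-in for any chat-completion API, and the <python>...</python> tagging convention is invented for the example rather than taken from any specific product.

```python
# Hypothetical sketch of an "LLM with a Python interpreter" tool loop.
# call_llm is a placeholder for a real chat-completion API; the <python> tag
# convention is made up here purely for illustration.
import re
import subprocess


def call_llm(messages: list[dict]) -> str:
    """Placeholder: send the conversation to a model and return its reply."""
    raise NotImplementedError("wire this up to an actual model API")


def run_python(code: str) -> str:
    """Execute model-written code in a subprocess and capture its output."""
    result = subprocess.run(
        ["python", "-c", code], capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr


def answer_with_interpreter(question: str, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    reply = ""
    for _ in range(max_turns):
        reply = call_llm(messages)
        match = re.search(r"<python>(.*?)</python>", reply, re.DOTALL)
        if match is None:                      # no code requested: final answer
            return reply
        output = run_python(match.group(1))    # the deterministic, "symbolic" step
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Interpreter output:\n" + output})
    return reply
```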

throw310822•10h ago
Jobs described computers as bicycles for the mind.

Turns out that LLMs find bicycles useful too.

killerstorm•52m ago
That's a rather silly strawman.

OpenAI demonstrated Codex - a version of GPT-3 which could write Python and JS code - in 2021, only a year after the first GPT-3 release.

Here's a demo of Codex doing MS Word tasks using Python code: https://www.youtube.com/watch?v=-Dpl2awseZU

Live coding demo: https://www.youtube.com/watch?v=SGUCcjHTmGY

There's also a demo of doing some data processing using Codex.

The idea of using an LLM to write code is rather obvious, and many people talked about it around the GPT-3 or even GPT-2 release. People know that many logic tasks require search, and it's rather silly to use an LLM to do that directly. If it can generate any text, it can generate Prolog code or a specification for a SAT solver, and then a dedicated, efficient tool can handle the computation.
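
To make that division of labor concrete, here is a minimal, hypothetical sketch (assuming the third-party python-sat package, installed with `pip install python-sat`): the model's only job is to emit a DIMACS CNF specification, and an off-the-shelf solver does the actual search. The llm_generated_spec string merely stands in for text a model might produce.

```python
# Hypothetical illustration of "the LLM writes the spec, a solver does the search".
# Assumes the third-party python-sat package: pip install python-sat
from pysat.formula import CNF
from pysat.solvers import Glucose3

# Stand-in for text an LLM might emit: a DIMACS encoding of
# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
llm_generated_spec = """\
p cnf 3 3
1 2 0
-1 3 0
-2 -3 0
"""

formula = CNF(from_string=llm_generated_spec)            # parse the symbolic spec
with Glucose3(bootstrap_with=formula.clauses) as solver:
    if solver.solve():                                    # dedicated tool does the search
        print("satisfying assignment:", solver.get_model())
    else:
        print("unsatisfiable")
```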

Gary Marcus is so detached from actual research that he might have genuinely missed the 2021 version of Codex. So he might truly believe that the Python interpreter was added "quietly", rather than being the big announcement it was in 2021.

This guy is known as a clown in the industry, and for a very good reason... If he were an actual researcher he would have jumped at the opportunity to build actual neurosymbolic demos when the first language models were released - you don't need to wait for GPT-3 to do that. But he preferred to revel in writing books claiming that the entire industry is wrong instead of actually building something.

rowanG077•8h ago
I don't really follow the low-level implementation of AIs. Why would Python tool use (among others) not qualify as symbolic? I don't think it's in question that tool use vastly improves LLMs.
brcmthrowaway•11h ago
I guess it is over for him

Filip Piekniewski next.

atleastoptimal•11h ago
He adopts a conspiratorial lens (or at least implies one): that neurosymbolic AI was "kept down" over the last four decades, which is a very funny reframing of the fact that it simply was never useful enough to lift itself off the ground by virtue of its own merits in the first place. If a ground-up neurosymbolic approach had shown promise in getting an AI system to the general level of intelligence LLMs have reached, it would have been adopted and scaled up. The money, research, and effort went to what was useful, and transformers won out by virtue of their undeniable utility.
nilkn•10h ago
As far as I can tell, putting the conspiratorial thinking aside, he's not really wrong, but I'm also not sure it matters that much.

If neurosymbolic AI was "sidelined" in favor of "connectionist" pure NN scaling, I don't think it was part of a conspiracy or deeply embedded ideological bias. I mean, maybe that's the case, but it seems far more likely to me that pure deep learning scaling just provided a more incremental and accessible on-ramp to building real-world systems that are genuinely useful for hundreds of millions of users. If anything, I think the lesson here was to spend less time theory-crafting and more time building. In this case, it looks like it was the builders who got to the endpoint that was only imagined by the theory-crafters, and that's what matters at the end of the day.

4b11b4•4h ago
That resonates. There _are_ a lot of good approximation functions that can be developed from deep learning and good data, with RL now on top. But then we really do need symbolism, and now we need to somehow combine them. And it'll be different for text vs. vision... Lots of ...s ahead
4b11b4•4h ago
But how to even combine them? Is it only via another AGENT that has a symbolic tool? And if that agent (or group of agents) can't extract multiple symbolic representations - or the one which best fits - from the current approximation (the context), then..
4b11b4•4h ago
This time, it really does make sense