
Maybe writing speed is a bottleneck for programming

https://buttondown.com/hillelwayne/archive/maybe-writing-speed-actually-is-a-bottleneck-for/
1•goranmoomin•34s ago•0 comments

GPT-4.1 Beast Prompt

https://github.com/sst/opencode/pull/778/files
1•tosh•48s ago•0 comments

China's humanoid robots generate more soccer excitement than human counterparts

https://techxplore.com/news/2025-06-china-humanoid-robots-generate-soccer.html
1•PaulHoule•50s ago•0 comments

Netflix uses AI effects for first time to cut costs

https://www.bbc.com/news/articles/c9vr4rymlw9o
1•fidotron•55s ago•0 comments

A circle and a hyperbola living in one plot

https://tobylam.xyz/2024/05/24/circle-hyperbola
1•tobytylam•1m ago•0 comments

Lessons from a Chimp: AI "Scheming" and the Quest for Ape Language

https://arxiv.org/abs/2507.03409
1•surprisetalk•3m ago•0 comments

The Role of Blood Plasma Donation Centers in Crime Reduction

https://marginalrevolution.com/marginalrevolution/2025/07/the-role-of-blood-plasma-donation-centers-in-crime-reduction.html
1•surprisetalk•4m ago•0 comments

Structuring Arrays with Algebraic Shapes [video]

https://www.youtube.com/watch?v=3Lbs0pJ_OHI
1•surprisetalk•4m ago•0 comments

The Krull dimension of the semiring of natural numbers is equal to 2

https://freedommathdance.blogspot.com/2025/07/the-krull-dimension-of-natural-numbers.html
1•surprisetalk•4m ago•0 comments

I analyzed ChatGPT with Chrome devtools to uncover its web search query patterns

https://acme.bot/blog/how-chatgpt-decides-when-to-search-the-web-a-data-driven-investigation/
2•abhishake85•5m ago•1 comments

Can finance put a stop to AI data mining?

https://www.ft.com/content/ab9ff8c1-9344-4192-909e-b04a23e6024e
1•hhs•7m ago•0 comments

Coordination and Collaborative Reasoning in Multi-Agent LLMs

https://arxiv.org/abs/2507.08616
1•nkko•7m ago•0 comments

Fascism for First Time Founders

https://www.techdirt.com/2025/07/17/fascism-for-first-time-founders/
3•danorama•8m ago•0 comments

Robots in China are riding the subway to make 7-Eleven deliveries

https://www.popsci.com/technology/robots-in-china-subway-7-eleven-deliveries/
3•bookofjoe•9m ago•0 comments

Pentagon's China-style rare earths deal triggers industry backlash

https://www.ft.com/content/0b7f002d-16ca-4a2e-be69-ba2c05e853d3
2•ironyman•9m ago•1 comments

The Power and Potential of Zero-Knowledge Proofs

https://cacm.acm.org/news/the-power-and-potential-of-zero-knowledge-proofs/
1•pseudolus•10m ago•0 comments

Russian infostealer sends commands to public LLM to craft requests on the fly

https://www.bleepingcomputer.com/news/security/lamehug-malware-uses-ai-llm-to-craft-windows-data-theft-commands-in-real-time/
2•pogue•11m ago•1 comments

lsr: ls with io_uring

https://tangled.sh/@rockorager.dev/lsr
2•mpweiher•12m ago•0 comments

Topology Meets Machine Learning

https://www.ams.org/journals/notices/202507/noti3193/noti3193.html?adat=August%202025&trk=3193&pdfissue=202507&pdffile=rnoti-p719.pdf&cat=none&type=.html
1•Pseudomanifold•12m ago•0 comments

Surprising finding could pave way for universal cancer vaccine

https://medicalxpress.com/news/2025-07-pave-universal-cancer-vaccine.html
1•pseudolus•13m ago•0 comments

Go at American Express Today: Seven Key Learnings

https://www.americanexpress.io/go-at-american-express-today/
1•amex_tech•13m ago•0 comments

NVIDIAScape: A three-line container escape exploit affecting all GPU runtimes

https://www.wiz.io/blog/nvidia-ai-vulnerability-cve-2025-23266-nvidiascape
1•nirohf•15m ago•0 comments

Python Audio Processing with Pedalboard

https://lwn.net/Articles/1027814/
1•sohkamyung•16m ago•0 comments

Cleaning up 5 years of tech debt in a full-stack JavaScript framework

https://wasp.sh/blog/2025/07/18/faster-wasp-dev
1•cprecioso•16m ago•0 comments

Ask HN: State of accessibility software based on computer vision

1•yehoshuapw•16m ago•0 comments

Making Earth Habitable – Jackson Schultz and Jordan McMillan, Rainmaker

https://www.youtube.com/watch?v=jYViZDHNN-8
1•RealityVoid•16m ago•1 comments

Pandas AI

https://pandas-ai.com/
1•skanderbm•17m ago•0 comments

SalesMan – Your AI sales coach

https://salesman-ai.com/
1•mieszek•17m ago•1 comments

The Far Right Contagion – It's not a Trump thing. It's not a politics thing

https://chadbourn.substack.com/p/the-far-right-contagion
2•MarcusE1W•18m ago•0 comments

Strict-validate-path-type does not allow period/dot/. in Exact or Prefix path

https://github.com/kubernetes/ingress-nginx/issues/11176
1•eadmund•22m ago•0 comments

Ask HN: How to Argue Against AI Enthusiasts?

9•Vektorceraptor•3h ago
I keep encountering the same attitude whenever I critically discuss AI: there’s a kind of fanatical optimism or hype, a “gold rush” mentality, that feels strange, yet can’t easily be refuted, because its proponents always retreat to the same position: “We don’t know. Time will tell.” But this doesn’t actually tell us what time will reveal, or whether it will be good or bad. So these considerations can't simply be dismissed.

Still, this argument seems irrefutable, and at the same time, when taken seriously, it exerts enormous pressure. It represents a kind of evolutionary logic that can override everything else.

That’s why I wanted to ask: are there people or works that realistically and pragmatically outline the limits of AI, something that can serve as a solid counterpoint to blind optimism? Something that can't be overwritten so easily? I don’t mean the “grand limit cases” like quantum randomness, Gödel’s incompleteness theorems, or similar topics, but rather something much closer to the actual technology: inconsistencies or paradoxes that directly affect neural networks and limit them.

Furthermore, a fundamental guiding strategy or maxim seems to be: “What is the next logical step?” This also makes criticism difficult, because it simulates a kind of logical compulsion, one that elevates a person above other doubts and relieves him of them. This quickly turns into: “As long as we are following pure logic, we don’t need to worry about anything else.”

I’m also looking for counterarguments to this maxim, from logic, philosophy, and sociology.

Thank you

Comments

vouaobrasil•3h ago
As a staunch AI critic, my opinion is that if you're on the fence and just want to bring some realism into the discussion, you can't do it by arguing about the future, because eventually AI might indeed gain a lot of the hyped capabilities people are talking about. You can look at academic papers on Google Scholar for cautionary tales, but in general, most researchers in academia want to get on the hype train to inflate their CVs.

The only way to truly be critical of AI is to be against it for other reasons, such as its damaging effects on society and its tendency to concentrate wealth at the top without much serious improvement in life for the average person. I think AI is really one of those things you're either for or against; there's no middle ground.

kevinh456•2h ago
Anchor on what "AI" actually is. "Artificial intelligence" is, IMO, a useless phrase. Make them clearly define what they're talking about.

AI is a bunch of different technologies that have many uses—neural networks, natural language processing, OCR, speech recognition, machine learning, computer vision, image classification, upscaling models, and our favorite new friends "generative pre-trained transformers" (GPT) and "large language models" (LLM) that make up key parts of "generative AI."

Once you make them specify what they're talking about, you can talk about the nature and inherent limitations of the technology.

I like to call GPTs and LLMs "statistical binary string predictors": given a string of binary input, predict the expected binary string that follows. It's an amazing technology, don't get me wrong, but we're already starting to see the limits.
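
To make that "statistical predictor" framing concrete, here is a minimal, purely illustrative sketch in Python: a bigram frequency model over word tokens. This is a toy under stated assumptions, not how GPTs or LLMs are actually built (no neural network, no attention, no byte-level encoding), but it captures the shape of the claim: the model only ever answers "which continuation was statistically most common after input like this in the training data?"

    # Toy bigram "string predictor" (illustration only, not a real LLM/GPT).
    # It predicts the next token purely from co-occurrence counts in training text.
    from collections import Counter, defaultdict

    def train(tokens):
        # For each token, count which tokens have followed it.
        follows = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            follows[prev][nxt] += 1
        return follows

    def predict_next(follows, token):
        # Return the statistically most common follower, or None if unseen.
        if token not in follows:
            return None
        return follows[token].most_common(1)[0][0]

    corpus = "the cat sat on the mat the cat ran".split()
    model = train(corpus)
    print(predict_next(model, "the"))  # -> 'cat' ('cat' follows 'the' twice, 'mat' only once)
    print(predict_next(model, "sat"))  # -> 'on'

Running it prints 'cat' then 'on'; the only "knowledge" involved is the frequency table built from the tiny corpus, which is the intuition behind the "statistical predictor" label.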

Limited context windows. Larger context/training tends to mean lower-quality results; more input tokens, lower quality. In some respects, newer models are regressing relative to earlier ones because they're chasing benchmarks rather than real-world use cases.

Start to dive into the details. Ask them to admit the problems with LLMs and GPTs. Ask them how they see those problems getting resolved. Most AI fanboys don't understand the technologies involved; expose that.