frontpage.

Transcribe your aunt's postcards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•2m ago•0 comments

.72% Variance Lance

1•mav5431•3m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•4m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•4m ago•0 comments

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•5m ago•1 comment

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•7m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•7m ago•1 comment

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
1•byandrev•8m ago•1 comment

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•8m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•9m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•9m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•11m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•12m ago•2 comments

Google in Your Terminal

https://gogcli.sh/
1•johlo•13m ago•0 comments

Shannon: Claude Code for Pen Testing: #1 on Github today

https://github.com/KeygraphHQ/shannon
1•hendler•13m ago•0 comments

Anthropic: Latest Claude model finds more than 500 vulnerabilities

https://www.scworld.com/news/anthropic-latest-claude-model-finds-more-than-500-vulnerabilities
2•Bender•18m ago•0 comments

Brooklyn cemetery plans human composting option, stirring interest and debate

https://www.cbsnews.com/newyork/news/brooklyn-green-wood-cemetery-human-composting/
1•geox•18m ago•0 comments

Why the 'Strivers' Are Right

https://greyenlightenment.com/2026/02/03/the-strivers-were-right-all-along/
1•paulpauper•19m ago•0 comments

Brain Dumps as a Literary Form

https://davegriffith.substack.com/p/brain-dumps-as-a-literary-form
1•gmays•20m ago•0 comments

Agentic Coding and the Problem of Oracles

https://epkconsulting.substack.com/p/agentic-coding-and-the-problem-of
1•qingsworkshop•20m ago•0 comments

Malicious packages for dYdX cryptocurrency exchange empty user wallets

https://arstechnica.com/security/2026/02/malicious-packages-for-dydx-cryptocurrency-exchange-empt...
1•Bender•20m ago•0 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
1•shubham-coder•21m ago•0 comments

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

https://arstechnica.com/health/2026/02/penisgate-erupts-at-olympics-scandal-exposes-risks-of-bulk...
4•Bender•22m ago•0 comments

Arcan Explained: A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
1•fanf2•23m ago•0 comments

What did we learn from the AI Village in 2025?

https://theaidigest.org/village/blog/what-we-learned-2025
1•mrkO99•24m ago•0 comments

An open replacement for the IBM 3174 Establishment Controller

https://github.com/lowobservable/oec
1•bri3d•26m ago•0 comments

The P in PGP isn't for pain: encrypting emails in the browser

https://ckardaris.github.io/blog/2026/02/07/encrypted-email.html
2•ckardaris•28m ago•0 comments

Show HN: Mirror Parliament where users vote on top of politicians and draft laws

https://github.com/fokdelafons/lustra
1•fokdelafons•29m ago•1 comment

Ask HN: Opus 4.6 ignoring instructions, how to use 4.5 in Claude Code instead?

1•Chance-Device•30m ago•0 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
2•ColinWright•33m ago•0 comments

Yann LeCun on why LLMs and AGI are "total BS" [video]

https://www.youtube.com/watch?v=7u-DXVADyhc
2•Der_Einzige•1mo ago

Comments

jascha_eng•1mo ago
I don't have 2 hours, but I asked Gemini for a summary since it has a good YouTube integration. Some interesting points imo, but I'm not sure I wanna watch the whole thing:

> This video features an in-depth interview with Yann LeCun, Chief AI Scientist at Meta and a Turing Award winner, hosted on The Information Bottleneck podcast. LeCun discusses his new startup, the limitations of current Large Language Models (LLMs), his vision for "World Models," and his optimistic outlook on AI safety.

Executive Summary

Yann LeCun argues that the current industry focus on scaling LLMs is a dead end for achieving human-level intelligence. He believes the future lies in World Models—systems that can understand the physical world, plan, and reason using abstract representations rather than just predicting the next token. To pursue this, he is launching a new company, Advanced Machine Intelligence (AMI), which will focus on research and productizing these architectures.

Key Insights from Yann LeCun

1. The "LLM Pill" & The Limits of Generative AI

LeCun is highly critical of the Silicon Valley consensus that simply scaling up LLMs and adding more data will lead to Artificial General Intelligence (AGI).

The "LLM Pill": He disparages the idea that you can reach superintelligence just by scaling LLMs, calling it "complete bullshit" [01:13:02].

Data Inefficiency: LLMs require trillions of tokens to learn what a 4-year-old learns from just living. He notes that a child sees about 16,000 hours of visual data in four years, which contains far more information than all the text on the internet [25:23].
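
A rough back-of-envelope for that comparison (a minimal sketch; the bandwidth and corpus figures are my illustrative assumptions, not numbers from the video):

    # Back-of-envelope only; all constants are order-of-magnitude assumptions.
    SECONDS_PER_HOUR = 3600
    hours_of_vision = 16_000                # figure quoted in the interview
    optic_nerve_bytes_per_s = 1_000_000     # assumed ~1 MB/s of visual input

    visual_bytes = hours_of_vision * SECONDS_PER_HOUR * optic_nerve_bytes_per_s
    print(f"visual input by age 4 ~ {visual_bytes:.1e} bytes")   # ~5.8e13

    llm_training_tokens = 20e12             # assumed ~2e13 tokens for a frontier LLM
    bytes_per_token = 4                     # rough average for English text
    text_bytes = llm_training_tokens * bytes_per_token
    print(f"LLM training text     ~ {text_bytes:.1e} bytes")     # ~8.0e13

Even with these conservative assumptions the two are in the same ballpark, and with a higher assumed optic-nerve bandwidth the visual stream comes out well ahead of the text corpus, which is the direction of LeCun's claim.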

Lack of Grounding: LLMs do not understand the physical world (e.g., object permanence, gravity) and only "regurgitate" answers based on fine-tuning rather than genuine understanding [36:22].

2. The Solution: World Models & JEPA

LeCun advocates for Joint Embedding Predictive Architectures (JEPA).

Prediction in Abstract Space: Unlike video generation models (like Sora) that try to predict every pixel (which is inefficient and hallucinatory), a World Model should predict in an abstract representation space. It filters out irrelevant details (noise) and focuses on what matters [15:35].
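
To make "predict in representation space" concrete, here is a minimal, hypothetical JEPA-style training step in PyTorch (a sketch of the general idea, not Meta's actual architecture): the loss compares predicted and target embeddings, so pixel-level noise never has to be reconstructed.

    # Hypothetical JEPA-style step: toy MLP encoders, loss computed in embedding space.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    D_IN, D_EMB = 1024, 128   # toy sizes; real systems encode image/video patches

    context_encoder = nn.Sequential(nn.Linear(D_IN, 256), nn.ReLU(), nn.Linear(256, D_EMB))
    target_encoder  = nn.Sequential(nn.Linear(D_IN, 256), nn.ReLU(), nn.Linear(256, D_EMB))
    predictor       = nn.Sequential(nn.Linear(D_EMB, 256), nn.ReLU(), nn.Linear(256, D_EMB))

    x_context = torch.randn(32, D_IN)   # visible part of the input
    x_target  = torch.randn(32, D_IN)   # masked/future part to be accounted for

    z_pred   = predictor(context_encoder(x_context))
    z_target = target_encoder(x_target).detach()   # stop-gradient: predict, don't reconstruct

    # The objective lives in abstract representation space, not pixel space.
    loss = F.mse_loss(z_pred, z_target)
    loss.backward()

Published JEPA variants typically update the target encoder as an exponential moving average of the context encoder to avoid collapse; that detail is omitted here.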

The Sailing Analogy: He compares sailing to running a world model. You don't simulate every water molecule (Computational Fluid Dynamics); you use an intuitive, abstract physics model to predict how the wind and waves will affect the boat [01:30:29].

Planning vs. Autocomplete: True intelligence requires planning—predicting the consequences of a sequence of actions to optimize an objective. LLMs just autocomplete text [07:26].
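
As an illustration of the difference (my sketch, not anything shown in the talk): a planner uses a world model to imagine the consequences of candidate action sequences and picks the one that best satisfies an objective, whereas an autocomplete model just emits the next token. The dynamics and cost functions below are toy placeholders.

    # Toy random-shooting planner over a stand-in latent world model.
    import numpy as np

    rng = np.random.default_rng(0)
    STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 8, 2, 5, 256
    goal = np.ones(STATE_DIM)                    # toy objective: reach this latent state

    def world_model(state, action):
        """Placeholder for a learned latent dynamics model."""
        return 0.9 * state + 0.1 * np.tanh(action).sum()

    def cost(state):
        return np.sum((state - goal) ** 2)       # distance from the desired outcome

    def plan(state):
        # Imagine each candidate action sequence with the world model, keep the best.
        candidates = rng.normal(size=(N_CANDIDATES, HORIZON, ACTION_DIM))
        totals = np.zeros(N_CANDIDATES)
        for i, actions in enumerate(candidates):
            s = state
            for a in actions:
                s = world_model(s, a)
                totals[i] += cost(s)
        return candidates[np.argmin(totals)][0]  # execute only the first action (MPC-style)

    print(plan(np.zeros(STATE_DIM)))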

3. A New Startup: Advanced Machine Intelligence (AMI)

LeCun is starting AMI to focus on these "World Models" and planning systems.

Open Research: He insists that upstream research must be published openly to be reliable. Closed research leads to "delusion" about one's own progress [04:59].

Goal: To become a supplier of intelligent systems that can reason and plan, moving beyond the capabilities of current chatbots.

4. AI Safety is an Engineering Problem

LeCun dismisses "doomer" narratives about AI taking over the world, viewing safety as a solvable engineering challenge akin to building reliable jet engines.

Objective-Driven Safety: He proposes "Objective-Driven AI". Instead of trying to fine-tune an LLM (which can be jailbroken), you build a system that generates actions by solving an optimization problem. Safety constraints (e.g., "don't hurt humans") are hard-coded into the objective function, making the system intrinsically safe by construction [01:02:04].
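
A toy illustration of what "safe by construction" could mean (my reading of the idea, not a design from the video): the guardrail is a term in the planner's objective itself, so a trajectory that violates it can never be selected, no matter how the system is prompted.

    # Hypothetical guardrail baked into the objective rather than bolted on afterwards.
    import numpy as np

    def task_cost(state):
        return float(np.sum(state ** 2))         # toy task objective

    def violates_guardrail(state):
        # Stand-in hard constraint, e.g. "never enter this region of state space".
        return bool(np.any(np.abs(state) > 3.0))

    def trajectory_objective(states):
        # A violating trajectory gets infinite cost, so the optimizer can never pick it.
        if any(violates_guardrail(s) for s in states):
            return float("inf")
        return sum(task_cost(s) for s in states)

    print(trajectory_objective([np.zeros(4), np.ones(4)]))       # finite -> selectable
    print(trajectory_objective([np.zeros(4), np.full(4, 10.0)])) # inf    -> ruled out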

The Jet Engine Analogy: Early jet engines were dangerous and unreliable, but through engineering, they became the safest mode of transport. AI will follow the same trajectory [58:25].

Dominance vs. Intelligence: He argues that the desire to dominate is a biological trait tied to social species, not a necessary byproduct of intelligence. A machine can be super-intelligent without having the drive to rule humanity [01:35:13].

5. Advice for Students

Don't Just Study CS: LeCun advises students to focus on subjects with a "long shelf life" like mathematics, physics, and engineering (control theory, signal processing).

Avoid Trends: Computer Science trends change too rapidly. Foundational knowledge in how to model reality (physics/math) is more valuable for future AI research than learning the specific coding framework of the month [01:36:20].

6. AGI Timelines

He rejects the term "AGI" because human intelligence is specialized, not general.

Prediction: Optimistically, we might have systems with "cat-level" or "dog-level" intelligence in 5–10 years. Reaching human level might take 20+ years if unforeseen obstacles arise [51:24].