
Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•1m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
2•RebelPotato•4m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
1•dev_tty01•7m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•8m ago•1 comment

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•16m ago•1 comment

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•16m ago•0 comments

How were the NIST ECDSA curve parameters generated? (2023)

https://saweis.net/posts/nist-curve-seed-origins.html
1•mooreds•17m ago•0 comments

AI, networks and Mechanical Turks (2025)

https://www.ben-evans.com/benedictevans/2025/11/23/ai-networks-and-mechanical-turks
1•mooreds•17m ago•0 comments

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•19m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•21m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•22m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•23m ago•0 comments

Apple Tried to Tamper-Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•25m ago•0 comments

Show HN: Isolating AI-generated code from human code | Vibe as a Code

https://www.npmjs.com/package/@gace/vaac
1•bstrama•26m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•26m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•28m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
7•geox•32m ago•0 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•33m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•37m ago•1 comment

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•44m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•47m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•48m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
6•doener•50m ago•1 comment

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•55m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•58m ago•1 comment

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•1h ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•1h ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•1h ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•1h ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
2•tolerance•1h ago•0 comments

Yann LeCun on why LLMs and AGI are "total BS" [video]

https://www.youtube.com/watch?v=7u-DXVADyhc
2•Der_Einzige•1mo ago

Comments

jascha_eng•1mo ago
I don't have 2 hours, but I asked Gemini for a summary because it has a good YouTube integration. Some interesting points, imo, but I'm not sure I wanna watch the whole thing:

> This video features an in-depth interview with Yann LeCun, Chief AI Scientist at Meta and a Turing Award winner, hosted on The Information Bottleneck podcast. LeCun discusses his new startup, the limitations of current Large Language Models (LLMs), his vision for "World Models," and his optimistic outlook on AI safety.

Executive Summary

Yann LeCun argues that the current industry focus on scaling LLMs is a dead end for achieving human-level intelligence. He believes the future lies in World Models—systems that can understand the physical world, plan, and reason using abstract representations rather than just predicting the next token. To pursue this, he is launching a new company, Advanced Machine Intelligence (AMI), which will focus on research and productizing these architectures.

Key Insights from Yann LeCun

1. The "LLM Pill" & The Limits of Generative AI

LeCun is highly critical of the Silicon Valley consensus that simply scaling up LLMs and adding more data will lead to Artificial General Intelligence (AGI).

The "LLM Pill": He disparages the idea that you can reach superintelligence just by scaling LLMs, calling it "complete bullshit" [01:13:02].

Data Inefficiency: LLMs require trillions of tokens to learn what a 4-year-old learns from just living. He notes that a child sees about 16,000 hours of visual data in four years, which contains far more information than all the text on the internet [25:23].
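That 16,000-hours figure checks out as a back-of-envelope calculation; the ~11 waking hours per day below is my own assumption, not a number from the talk:

```python
# Sanity check on "a child sees about 16,000 hours of visual data in four years".
waking_hours_per_day = 11          # assumed average for a young child
hours = 4 * 365 * waking_hours_per_day
print(hours)  # 16060, i.e. roughly 16,000 hours
```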

Lack of Grounding: LLMs do not understand the physical world (e.g., object permanence, gravity) and only "regurgitate" answers based on fine-tuning rather than genuine understanding [36:22].

2. The Solution: World Models & JEPA

LeCun advocates for Joint Embedding Predictive Architectures (JEPA).

Prediction in Abstract Space: Unlike video generation models (like Sora) that try to predict every pixel (which is inefficient and hallucinatory), a World Model should predict in an abstract representation space. It filters out irrelevant details (noise) and focuses on what matters [15:35].

The Sailing Analogy: He compares sailing to running a world model. You don't simulate every water molecule (Computational Fluid Dynamics); you use an intuitive, abstract physics model to predict how the wind and waves will affect the boat [01:30:29].

Planning vs. Autocomplete: True intelligence requires planning—predicting the consequences of a sequence of actions to optimize an objective. LLMs just autocomplete text [07:26].
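The abstract-space idea above can be sketched in a few lines. Everything here is a toy stand-in (random linear maps for the learned encoder and predictor), not JEPA's actual architecture; the point is only where the loss is computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned networks: an encoder that maps a raw 64-dim
# observation to a 4-dim abstract state, and a predictor over that state.
W_enc = rng.normal(size=(4, 64))
W_pred = rng.normal(size=(4, 4))

def encode(obs):
    return W_enc @ obs

def predict(z):
    return W_pred @ z

obs_t = rng.normal(size=64)
obs_next = rng.normal(size=64)

# Pixel-space loss: penalizes every unpredictable detail of the raw observation.
pixel_loss = np.mean((obs_next - obs_t) ** 2)

# JEPA-style loss: compare prediction and target only in the abstract
# embedding space, so irrelevant noise in the raw observation drops out.
z_next_pred = predict(encode(obs_t))
jepa_loss = np.mean((encode(obs_next) - z_next_pred) ** 2)
```

Training the encoder and predictor jointly (with tricks to avoid representation collapse) is the hard part the sketch omits.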

3. A New Startup: Advanced Machine Intelligence (AMI)

LeCun is starting AMI to focus on these "World Models" and planning systems.

Open Research: He insists that upstream research must be published openly to be reliable. Closed research leads to "delusion" about one's own progress [04:59].

Goal: To become a supplier of intelligent systems that can reason and plan, moving beyond the capabilities of current chatbots.

4. AI Safety is an Engineering Problem

LeCun dismisses "doomer" narratives about AI taking over the world, viewing safety as a solvable engineering challenge akin to building reliable jet engines.

Objective-Driven Safety: He proposes "Objective-Driven AI". Instead of trying to fine-tune an LLM (which can be jailbroken), you build a system that generates actions by solving an optimization problem. Safety constraints (e.g., "don't hurt humans") are hard-coded into the objective function, making the system intrinsically safe by construction [01:02:04].
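A minimal sketch of that "safe by construction" idea (a toy illustration, not LeCun's actual proposal): the optimizer can only choose among actions that already satisfy the hard constraint, so no amount of pressure from the task objective can select an unsafe one:

```python
def task_objective(action):
    # Hypothetical task: get as close to speed 10 as possible.
    return -abs(action - 10)

def safety_constraint(action):
    # Hard-coded guardrail: never exceed speed 7.
    return action <= 7

def choose_action(candidates):
    # Unsafe actions are filtered out before optimization, not penalized after.
    safe = [a for a in candidates if safety_constraint(a)]
    return max(safe, key=task_objective)

best = choose_action(range(15))
print(best)  # 7 — the objective prefers 10, but the constraint caps the choice
```

Contrast this with fine-tuning an LLM to refuse unsafe outputs: there the guardrail is a learned tendency that can be jailbroken, not a constraint on the action space itself.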

The Jet Engine Analogy: Early jet engines were dangerous and unreliable, but through engineering, they became the safest mode of transport. AI will follow the same trajectory [58:25].

Dominance vs. Intelligence: He argues that the desire to dominate is a biological trait tied to social species, not a necessary byproduct of intelligence. A machine can be super-intelligent without having the drive to rule humanity [01:35:13].

5. Advice for Students

Don't Just Study CS: LeCun advises students to focus on subjects with a "long shelf life" like mathematics, physics, and engineering (control theory, signal processing).

Avoid Trends: Computer Science trends change too rapidly. Foundational knowledge in how to model reality (physics/math) is more valuable for future AI research than learning the specific coding framework of the month [01:36:20].

6. AGI Timelines

He rejects the term "AGI" because human intelligence is specialized, not general.

Prediction: Optimistically, we might have systems with "cat-level" or "dog-level" intelligence in 5–10 years. Reaching human level might take 20+ years if unforeseen obstacles arise [51:24].