frontpage.

Project Genie: Experimenting with infinite, interactive worlds

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
145•meetpateltech•2h ago•73 comments

Claude Code Daily Benchmarks for Degradation Tracking

https://marginlab.ai/trackers/claude-code/
358•qwesr123•5h ago•191 comments

AI's Impact on Engineering Jobs May Be Different Than Expected

https://semiengineering.com/ais-impact-on-engineering-jobs-may-be-different-than-initial-projecti...
39•rbanffy•1h ago•35 comments

Drug trio found to block tumour resistance in pancreatic cancer

https://www.drugtargetreview.com/news/192714/drug-trio-found-to-block-tumour-resistance-in-pancre...
77•axiomdata316•3h ago•40 comments

My Mom and Dr. DeepSeek (2025)

https://restofworld.org/2025/ai-chatbot-china-sick/
34•kieto•36m ago•4 comments

Launch HN: AgentMail (YC S25) – An API that gives agents their own email inboxes

61•Haakam21•2h ago•66 comments

OTelBench: AI struggles with simple SRE tasks (Opus 4.5 scores only 29%)

https://quesma.com/blog/introducing-otel-bench/
105•stared•3h ago•56 comments

Europe’s next-generation weather satellite sends back first images

https://www.esa.int/Applications/Observing_the_Earth/Meteorological_missions/meteosat_third_gener...
578•saubeidl•12h ago•81 comments

We can’t send mail farther than 500 miles (2002)

https://web.mit.edu/jemorris/humor/500-miles
604•giancarlostoro•15h ago•97 comments

US cybersecurity chief leaked sensitive government files to ChatGPT: Report

https://www.dexerto.com/entertainment/us-cybersecurity-chief-leaked-sensitive-government-files-to...
265•randycupertino•3h ago•141 comments

Reflex (YC W23) Senior Software Engineer Infra

https://www.ycombinator.com/companies/reflex/jobs/Jcwrz7A-lead-software-engineer-infra
1•apetuskey•2h ago

EmulatorJS

https://github.com/EmulatorJS/EmulatorJS
46•avaer•6d ago•5 comments

Tesla is committing automotive suicide

https://electrek.co/2026/01/29/tesla-committing-automotive-suicide/
103•jethronethro•1h ago•81 comments

Usenet personality

https://en.wikipedia.org/wiki/Usenet_personality
15•mellosouls•3d ago•3 comments

C++ Modules Are Here to Stay

https://faresbakhit.github.io/e/cpp-modules/
15•faresahmed•5d ago•5 comments

Apple to soon take up to 30% cut from all Patreon creators in iOS app

https://www.macrumors.com/2026/01/28/patreon-apple-tax/
916•pier25•22h ago•754 comments

Run Clawdbot/Moltbot on Cloudflare with Moltworker

https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/
71•ghostwriternr•4h ago•29 comments

How to Choose Colors for Your CLI Applications (2023)

https://blog.xoria.org/terminal-colors/
109•kruuuder•4h ago•67 comments

Making niche solutions is the point

https://ntietz.com/blog/making-niche-solutions-is-the-point/
59•evakhoury•2d ago•22 comments

Heating homes with the largest particle accelerator

https://home.cern/news/news/cern/heating-homes-worlds-largest-particle-accelerator
35•elashri•3h ago•14 comments

Computing Sharding with Einsum

https://blog.ezyang.com/2026/01/computing-sharding-with-einsum/
14•matt_d•4d ago•0 comments

OpenAI's In-House Data Agent

https://openai.com/index/inside-our-in-house-data-agent
14•meetpateltech•1h ago•2 comments

The Sovereign Tech Fund Invests in Scala

https://www.scala-lang.org/blog/2026/01/27/sta-invests-in-scala.html
72•bishabosha•6h ago•49 comments

Break Me If You Can: Exploiting PKO and Relay Attacks in 3DES/AES NFC

https://www.breakmeifyoucan.com/
34•noproto•5h ago•26 comments

Playing Board Games with Deep Convolutional Neural Network on 8bit Motorola 6809

https://ipsj.ixsq.nii.ac.jp/records/229345
25•mci•5h ago•6 comments

Show HN: ShapedQL – A SQL engine for multi-stage ranking and RAG

https://playground.shaped.ai
66•tullie•2d ago•20 comments

Waymo robotaxi hits a child near an elementary school in Santa Monica

https://techcrunch.com/2026/01/29/waymo-robotaxi-hits-a-child-near-an-elementary-school-in-santa-...
155•voxadam•5h ago•272 comments

SpaceX in Merger Talks with xAI

https://www.reuters.com/world/musks-spacex-merger-talks-with-xai-ahead-planned-ipo-source-says-20...
13•m-hodges•28m ago•1 comment

Vitamin D and Omega-3 have a larger effect on depression than antidepressants

https://blog.ncase.me/on-depression/
774•mijailt•8h ago•527 comments

County pays $600k to pentesters it arrested for assessing courthouse security

https://arstechnica.com/security/2026/01/county-pays-600000-to-pentesters-it-arrested-for-assessi...
8•MBCook•34m ago•0 comments

Project Genie: Experimenting with infinite, interactive worlds

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
145•meetpateltech•2h ago

Comments

meetpateltech•2h ago
Google Deepmind Page: https://deepmind.google/models/genie/

Try it in Google Labs: https://labs.google/projectgenie

(Project Genie is available to Google AI Ultra subscribers in the US 18+.)

nickandbro•1h ago
This could be the future of film. Instead of prompting where you don't know what the model will produce, you could use fine-grained motion controls to get the shot you are looking for. If you want to adjust the shot after, you could just checkpoint the model there, by taking a screenshot, and rerun. Crazy.
JKCalhoun•1h ago
I feel like people are already doing this. Essentially storyboarding first.

This guy a month ago for example: https://youtu.be/SGJC4Hnz3m0

mosquitobiten•1h ago
Every character only goes forward; object permanence is apparently still out of reach.
mikelevins•1h ago
I've been experimenting with that from a slightly different angle: teaching Claude how to play and referee a pencil-and-paper RPG that I developed over about 20 years starting in the mid 1970s. Claude can't quite do it yet for reasons related to permanence and learning over time, but it can do surprisingly well up until it runs into those problems, and it's possible to help it past some obstacles.

The game is called "Explorers' Guild", or "xg" for short. It's easier for Claude to act as a player than as a director (xg's version of a dungeon master or game master), again mainly because of permanence and learning issues, but to the extent that I can help it past those issues it's also fairly good at acting as a director. It does require some pretty specific stuff in the system prompt to, for example, avoid confabulating stuff that doesn't fit the world or the scenario.

But to really build a version of xg on Claude it needs better ways to remember and improve what it has learned about playing the game, and what it has learned about a specific group of players in a specific scenario as it develops over time.

montebicyclelo•1h ago
Reminds me of this [1] HN post from 9 months ago, where the author trained a neural network to do world emulation from video recordings of their local park — you can walk around in their interactive demo [2].

I don't have access to the DeepMind demo, but from the video it looks like it takes the idea up a notch.

(I don't know the exact lineage of these ideas, but a general observation is that it's a shame that it's the norm for blog posts / indie demos to not get cited.)

[1] https://news.ycombinator.com/item?id=43798757

[2] https://madebyoll.in/posts/world_emulation_via_dnn/demo/

0xcb0•1h ago
I keep on repeating myself, but it feels like I'm living in the future. Can't wait to hook this up to my old Oculus glasses and let Genie create a fully realistic sailing simulator for me, where I can train sailing in realistic conditions, on boats I'd love to sail.

If making games out of these simulations works, it'd be the end for a lot of big studios, and might be a renaissance for small to one-person game studios.

neom•1h ago
...and then, the pneumatics in your living room.
jsheard•1h ago
Isn't this still essentially "vibe simulation" inferred from videos? Surface-level visual realism is one thing, but expecting it to figure out the exact physical mechanics of sailing just by watching boats, and usefully abstract that into a "gamified" form, is another thing entirely.
falcor84•18m ago
Why wouldn't it just hook it into something like physx?
JeremyNT•7m ago
Yeah, I have a whole lot of trouble imagining this replacing traditional video games any time soon; we actually have very good and performant representations of how physics works, and games are tuned for the player to have an enjoyable experience.

There's obviously something insanely impressive about these google experiments, and it certainly feels like there's some kind of use case for them somewhere, but I'm not sure exactly where they fit in.

nsilvestri•1h ago
The bottleneck for games of any size is always whether they are good. There are plenty of small indies which do not put out good games. I don't see world models improving game design or fun factors.

If I am wrong, then the huge supply of fun games will completely saturate demand, and it will be no easier for indie game devs to stand out.

bdbdbdb•1h ago
It's very impressive tech but subject to the same limitations as other generative AI: Inconsistency, inaccurate physics, limited time, lag, massively expensive computation.

You COULD create a sailing sim but after ten minutes you might be walking on water, or in the bath, and it would use more power than a small ferry.

There's no way this tech can run on a PS5 or anything close to it.

ziofill•1h ago
You raise good points, but I think the “it’s not good enough” stance won’t last for long.
WarmWash•46m ago
Five years is nothing to wait for tech like this. I'm sure we will see the first, however small, crop of "terminally plugged in" humans on the back of this in the relatively near future.
Avicebron•44m ago
Honestly, getting a Sunfish is probably cheaper than a VR headset if you want to "train sailing".
avaer•6m ago
> If making games out of these simulations works, it'd be the end for a lot of big studios, and might be a renaissance for small to one-person game studios.

I mean, if making a game eventually boils down to cooking a sufficient prompt (which to be clear, I'm not talking about text, these prompts are probably going to be more like video databases) then I'm not sure if it will be a renaissance for "one person game studios" any more than AI image generation has been a renaissance for "one person artists".

I want to be optimistic, but it's hard to deny the massive distribution stranglehold that the media publishing landscape has, and that has nothing to do with technology.

srameshc•1h ago
What’s the endgame here? For a small gaming studio, what are the actual implications?
educasean•1h ago
I understand the ultimate end goal to be simulation of life. A near perfect replica of the real world we can use to simulate and test medicine, economy, and social impact.
aurumque•1h ago
I would think that building an environment which can be managed by a game engine is the first pass. In a few years, when we are able to render more than 60 seconds, it could very well replace the game engine entirely by just rendering everything in real time based on user interactions. The final phase is just prompts which turn directly into interactive games, maybe even multiplayer. When I see the progress we've made on things like DOOM, where it can infer the proper rendering of actions like firing weapons and even updating scores on hits, it doesn't feel like we're very far off, a few years at most. For a game studio that could mean cutting out almost everything between keyboard and display, but for now just replacing the asset pipeline is huge.
mikewittie•20m ago
We seem to think that Genie is good at the creative part, but bad at the consistency and performance part. How hard would it be to take 60 seconds of Genie output and pipe it into a model that generates a consistent and performant 3D environment?
hiccuphippo•1h ago
It seems to be generating images in real time, not 3d scenes. It might still be useful for prototyping.
saberience•45m ago
There are collisions, though, and seemingly physics, so it doesn't seem to be a huge stretch that this could be used for games.
xyzsparetimexyz•1h ago
It means you should go the other way. Open world winning against smaller, handcrafted environments and stories was generally a mistake, and so is this.
mediaman•42m ago
What does it mean that open world winning was a mistake? That the market is wrong, that people's preferences were incorrect, and that they should prefer small handcrafted environments instead of what they seem to actually buy?
in-silico•27m ago
The endgame has nothing to do with gaming.

The goal of world models like Genie is to be a way for AI and robots to "imagine" things. Then, they could practice tasks inside of the simulated world or reason about actions by simulating their outcome.

sy26•1h ago
I have been confused for a long time about why FB is not motivated enough to invest in world models; it IS the key to unlocking their "metaverse" vision. And instead they let Yann LeCun go.
phailhaus•1h ago
Most people don't like putting on VR headsets, no matter what the content is. It just never broke out of the tech enthusiast niche.
observationist•1h ago
LeCun wasn't producing results. He was obstinate and insistent on his own theories and ideas which weren't, and possibly aren't, going anywhere. He refused to engage with LLMs and compete in the market that exists, and spent all his effort and energy on unproven ideas and research, which split the company's mission and competitiveness. They lost their place as one of the top 4 AI companies, and are now a full generation behind, in part due to the split efforts and lack of enthusiastic participation by all the Meta AI team. If you look at the chaos and churn at the highest levels across the industry, there's not a lot of room for mission creep by leadership, and LeCun thoroughly demonstrated he wasn't suited for the mission desired by Meta.

I think he's lucky he got out with his reputation relatively intact.

halfmatthalfcat•58m ago
Were you there or just an attentive outsider?
observationist•52m ago
Attentive outsider and acquaintance of a couple people who are or were employed there. Nothing I'm saying is particularly inside baseball, though, it's pretty well covered by all the blogs and podcasts.
richard___•39m ago
What podcast?
observationist•17m ago
Machine Learning Street Talk and Dwarkesh are excellent. Various discord communities, forums, and blogs downstream of the big podcasts, and following researchers on X keeps you in the loop on a lot of these things, and then you can watch for random interviews and presentations on youtube when you know who the interesting people and subjects are.
qwertyi0k•30m ago
Most serious researchers want to work on interesting problems like reinforcement learning or robotics or RNNs or a dozen other avant-garde subjects. None want to work on "boring" LLM technology, which requires significant engineering effort and huge dataset-wrangling effort.
observationist•20m ago
This is true - Ilya got an exit and is engaged in serious research, but research is by its nature unpredictable. Meta wanted a product and to compete in the AI market, and JEPA was incompatible with that. Now LeCun has a lab and resources to pursue his research, and Meta has refocused efforts on LLMs and the marketplace - it remains to be seen if they'll be able to regain their position. I hope they do - open models and relatively open research are important, and the more serious AI labs that do this, the more it incentivizes others to do the same, and keeps the ones that have committed to it honest.
ezst•23m ago
Since a hot take is as good as the next one: LLMs are by the day more and more clearly understood as a "local maximum": flawed capabilities, limited efficiency, a trillion dollars plus a large chunk of the USA's GDP wasted, with nobody even turning a profit from it, nor able to build something that can't be reproduced for free within 6 months.

When the right move (strategically, economically) is to not compete, the head of the AI division acknowledging the above and deciding to focus on the next breakthrough seems absolutely reasonable.

qwertyi0k•21m ago
To be fair, this was his job description: Fundamental AI Research (FAIR) lab. Not AI products division. You can't expect marketable products from a fundamental AI research lab.
qwertox•49m ago
Isn't it more like this: JEPA looks at the video, "a dog walks out of the door, the mailman comes, dog is happy" and the next frame would need to look like "mailman must move to mailbox, dog will run happily towards him", which then an image/video generator would need to render.

Genie looks at the video, "when this group of pixels looks like this and the user presses 'jump', I will render the group different in this way in the next frame."

Genie is an artist drawing a flipbook. To tell you what happens next, it must draw the page. If it doesn't draw it, the story doesn't exist.

JEPA is a novelist writing a summary. To tell you what happens next, it just writes "The car crashes." It doesn't need to describe what the twisted metal looks like to know the crash happened.
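The flipbook-vs-novelist contrast above can be sketched as two toy rollout loops. Everything here is a hypothetical stand-in (simple array ops in place of trained networks), not either model's real architecture; the point is only the shape of the computation: a pixel-space predictor must produce a full frame at every step, while a latent-space predictor advances a compact state and never renders unless asked.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame):
    # Stand-in encoder: frame -> compact latent (here, just a channel mean).
    return frame.mean(axis=-1)

def predict_latent(z, action):
    # JEPA-style step: next state computed entirely in latent space.
    return z + action

def predict_frame(frame, action):
    # Genie-style step: the "next page of the flipbook" is a full frame.
    return np.clip(frame + action, 0.0, 1.0)

frame = rng.random((64, 64, 3))
action = 0.1

# Genie-style rollout: every step pays the full frame-generation cost.
pixel_rollout = [frame]
for _ in range(3):
    pixel_rollout.append(predict_frame(pixel_rollout[-1], action))

# JEPA-style rollout: steps happen in the smaller latent space;
# pixels are never produced unless you bolt on a decoder.
latent_rollout = [encode(frame)]
for _ in range(3):
    latent_rollout.append(predict_latent(latent_rollout[-1], action))

print(pixel_rollout[-1].shape)   # full frames at every step
print(latent_rollout[-1].shape)  # compact state, no rendering
```

In this toy, the latent rollout carries a third of the data per step; in real systems the gap between "render every frame" and "advance a latent" is far larger, which is the crux of the comparison.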

general_reveal•30m ago
You are beyond correct. World models are what save their Reality Labs investment. I would say if Reality Labs cannot productize world models, then that entire project needs to be scrapped.
anxtyinmgmt•1h ago
Demis stays cooking
RivieraKid•1h ago
This would be really cool if polished and integrated with VR.
ofrzeta•1h ago
I don't know ... it's impressive and all but the result always looks kind of dead.
api•57m ago
It's super cool but I see it as a much more flexible open ended take on the idea of procedurally generated worlds where hard-coded deterministic math and rendering parameters are replaced by prompt-able models.

The deadness you're talking about is there in procedural worlds too, and it stems from the fact that there's not actually much "there." Think of it as a kind of illusion or a magic trick with math. It replicates some of the macro structure of the world but the true information content is low.

Search YouTube for procedural landscape examples. Some of them are actually a lot more visually impressive than this, but without the interactivity. It's a popular topic in the demo scene too where people have made tiny demos (e.g. under 1k in size) that generate impressive scenes.

I expect to see generative AI techniques like this show up in games, though it might take a bit due to their high computational cost compared to traditional procedural generation.

saberience•46m ago
This sort of comment reminds me about the comments by programmers two years ago.

"Sure it can write a single function but the code is terrible when it tries to write a whole class..."

phailhaus•1h ago
I have no idea why Google is wasting their time with this. Trying to hallucinate an entire world is a dead-end. There will never be enough predictability in the output for it to be cohesive in any meaningful way, by design. Why are they not training models to help write games instead? You wouldn't have to worry about permanence and consistency at all, since they would be enforced by the code, like all games today.

Look at how much prompting it takes to vibe code a prototype. And they want us to think we'll be able to prompt a whole world?

asim•33m ago
Take the positive spin. What if you could put in all the inputs and it could simulate real-world scenarios you can walk through to benefit mankind, e.g. disaster scenarios, events, plane crashes, traffic patterns? There are a lot of useful applications for it. I don't like the framing at this time, but I also get where it's going. The engineer in me is drawn to it, but the Muslim in me is very scared to hear anyone talk about creating worlds... But again, I have to separate my view from the reality that this could have very positive real-world benefits when you can simulate scenarios. So I could put in a 2-page or 10-page scenario that gets played out or simulated and walk through it: not just predictive stuff, but things that have happened, so I can map crime scenes or anything. In the end this performance art exists because they are a product company being benchmarked by Wall Street and they'll need customers for the technology, but at the same time they probably already have uses for it internally.
jsheard•29m ago
> What if you could put in all the inputs and it can simulate real world scenarios you can walk through to benefit mankind e.g disaster scenarios, events, plane crashes, traffic patterns.

This is only a useful premise if it can do any of those things accurately, as opposed to dreaming up something kinda plausible based on an amalgam of vaguely related YouTube videos.

q3k•6m ago
> What if you could put in all the inputs and it can simulate real world scenarios you can walk through to benefit mankind e.g disaster scenarios, events, plane crashes, traffic patterns.

What's the use? Current scientific models clearly showing natural disasters and how to prevent them are being ignored. Hell, ignoring scientific consensus is a fantastic political platform.

seedie•28m ago
Imo they explain pretty well what they are trying to achieve with SIMA and Genie in the Google Deepmind Podcast[1]. They see it as the way to get to AGI by letting AI agents learn for themselves in simulated worlds. Kind of like how they let AlphaGo train for Go in an enormous amount of simulated games.

[1] https://youtu.be/n5x6yXDj0uo

MillionOClock•24m ago
A hybrid approach could maybe work: have a more or less standard game engine for coherence and use this kind of generative AI as a short-term rendering and physics sim engine.
godelski•6m ago

  > Why are they not training models to help write games instead?
Genie isn't about making games... Granted, for some reason they don't put this at the top. Classic Google, not communicating well...

  | It simulates physics and interactions for dynamic worlds, while its breakthrough consistency enables the simulation of any real-world scenario — from robotics and modelling animation and fiction, to exploring locations and historical settings.
The key part is simulation. That's what they are building this for. Ignore everything else.

Same with Nvidia's Earth 2 and Cosmos (and a bit like Isaac). Games or VR environments are not the primary drive; the primary drive is training robots (including non-humanoids, such as Waymo) and just getting the data. It's exactly because of this that perfect physics (or, let's be honest, realistic physics[0]) isn't strictly required. Getting 50% of the way there in simulation really does cut down the costs of development, even if we recognize that the cost steepens as we approach "there". I really wish they didn't call them "world models", or more specifically didn't shove the word "physics" in there, but hey, is it really marketing if they don't claim a golden goose can not only lay actual gold eggs but also diamonds, and that its honks cure cancer?

[0] Looking right does not mean it is right. Maybe it'll match your intuition or undergrad general physics classes with calculus, but talk to a real physicist if you doubt me here. Even one with just an undergrad will tell you this physics is unrealistic, and anyone worth their salt will tell you how unintuitive physics ends up being as you get realistic, even well before approaching quantum. Go talk to the HPC folks and ask them why they need supercomputers... Sorry, physics can't be done from observation alone.

dyauspitr•2m ago
Why is it a dead end? You don't meaningfully explain that. These models look like you can interact with them, and they seem to replicate physics models.
ollin•1h ago
Really great to see this released! Some interesting videos from early-access users:

- https://youtu.be/15KtGNgpVnE?si=rgQ0PSRniRGcvN31&t=197 walking through various cities

- https://x.com/fofrAI/status/2016936855607136506 helicopter / flight sim

- https://x.com/venturetwins/status/2016919922727850333 space station, https://x.com/venturetwins/status/2016920340602278368 Dunkin' Donuts

- https://youtu.be/lALGud1Ynhc?si=10ERYyMFHiwL8rQ7&t=207 simulating a laptop computer, moving the mouse

- https://x.com/emollick/status/2016919989865840906 otter airline pilot with a duck on its head walking through a Rothko inspired airport

ge96•1h ago
Damn, that was crazy: the picture of the tabletop setup/cardboard robot that becomes 3D interactive.
WarmWash•50m ago
The actual breakthrough with Genie is being able to turn around and look back, and see the same scene that was there before. A few other labs have similar world simulators, but they all struggle badly with keeping coherence of things not in view, which is why they always walk forward and never look around.
sfn42•36m ago
And what if I go somewhere then go back there a week later?
jsheard•34m ago
Best they can do is 60 seconds, for now at least.
nozbufferHere•20m ago
Still amazed it took ML people so long to realize they needed an explicit representation to cache stuff.
moohaad•39m ago
Everyone will make their own game now.
JaiRathore•37m ago
I now believe we live in a simulation
cloudflare728•37m ago
We will probably see Ready Player One in a few decades. Hoping to stay alive till then.
HardCodedBias•34m ago
Decades?

I mean, yes, the probability of having that level of tech in decades is quite high.

But the technology is moving very fast right now. It sounds crazy, but I think that there is a 50% chance of having Ready Player One-level technology.

It's absolutely possible it will take more time to become economical.

lexandstuff•26m ago
The mass-poverty and climate-change-ravaged world parts, I could definitely see.
krunck•36m ago
The more of this I see, the more I want to spend time away from screens, doing the things I love in the real world.
MillionOClock•27m ago
I love AI but I also hope it will paradoxically make people realize the value of real life experiences and human relationships.
in-silico•30m ago
Everyone here seems too caught up in the idea that Genie is the product, and that its purpose is to be a video game, movie, or VR environment.

That is not the goal.

The purpose of world models like Genie is to be the "imagination" of next-generation AI and robotics systems: a way for them to simulate the outcomes of potential actions in order to inform decisions.

avaer•20m ago
Soft disagree; if you wanted imagination, you wouldn't need to make a video model. You probably wouldn't need to decode the latents at all. That seems pretty far from information-theoretic optimality, the kind that you want in a good+fast AI model making decisions.

The whole reason for LLMs inferencing human-processable text, and "world models" inferencing human-interactive video, is precisely so that humans can connect in and debug the thing.

I think the purpose of Genie is to be a video game, but it's a video game for AI researchers developing AIs.

I do agree that the entertainment implications are kind of the research exhaust of the end goal.

in-silico•15m ago
Sufficiently informative latents can be decoded into video.

When you simulate a stream of those latents, you can decode them into video.

If you were trying to make an impressive demo for the public, you probably would decode them into video, even if the real applications don't require it.

Converting the latents to pixel space also makes them compatible with existing image/video models and multimodal LLMs, which (without specialized training) can't interpret the latents directly.

SequoiaHope•4m ago
Didn’t the original world models paper do some training in latent space?

I think robots imagining the next step (in latent space) will be useful. It’s useful for people. A great way to validate that a robot is properly imagining the future is to make that latent space renderable in pixels.
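The "original world models paper" referenced here (Ha and Schmidhuber, 2018) did roll its dynamics model forward in VAE latent space: a V component encoded observations into latents, an M component predicted the next latent, and the decoder was only needed when a human wanted to look. A toy sketch of that shape, with random matrices standing in for the trained networks and all names illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT = 8  # toy latent size; the paper used a larger z

# Random stand-ins for the paper's trained components.
W_enc = rng.standard_normal((32, LATENT)) * 0.1   # V: encoder
W_dec = rng.standard_normal((LATENT, 32)) * 0.1   # V: decoder
W_dyn = rng.standard_normal((LATENT + 1, LATENT)) * 0.1  # M: dynamics

def encode(obs):
    # V: observation -> latent z.
    return np.tanh(obs @ W_enc)

def step_latent(z, action):
    # M: (z, action) -> next z, entirely in latent space.
    return np.tanh(np.concatenate([z, [action]]) @ W_dyn)

def decode(z):
    # Only needed when a human wants to inspect the "dream".
    return z @ W_dec

obs = rng.standard_normal(32)
z = encode(obs)

# "Dream" rollout: imagine 10 steps without ever rendering pixels.
for _ in range(10):
    z = step_latent(z, action=1.0)

frame = decode(z)  # render one imagined step for visual validation
print(z.shape, frame.shape)
```

This matches the point above: the rollout itself stays in latent space, and decoding to pixels is a debugging/validation step rather than part of the imagination loop.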

dyauspitr•5m ago
That’s part of it but if you could actually pull out 3D models from these worlds, it would massively speed up game development.
avaer•1m ago
You already can, check out Marble/World Labs, Meshy, and others.

It's not really as much of a boon as you'd think though, since throwing together a 3D model is not the bottleneck to making a sellable video game. You've had model marketplaces for a long time now.

analog8374•1m ago
If creating an infinite world is so trivially easy (relatively speaking), then Occam suggests that this world is generated.