frontpage.
The 67-Second OpenTelemetry Problem

https://getlawrence.com/blog/The-67-Second-OpenTelemetry-Problem
1•Itayoved•1m ago•0 comments

Asta: Accelerating science through trustworthy agentic AI

https://allenai.org/blog/asta
1•kjhughes•2m ago•0 comments

Browserbase has a 29% failure rate on basic page loads

https://anchorbrowser.io/blog/page-load-reliability-on-the-top-100-websites-in-the-us
1•jmarbach•2m ago•0 comments

Atmos: A language for structured concurrency and event-driven programming, based

https://github.com/atmos-lang/atmos
1•fanf2•4m ago•0 comments

Will AI Replace Human Thinking? The Case for Writing and Coding Manually

https://www.ssp.sh/brain/will-ai-replace-humans/
2•articsputnik•6m ago•0 comments

Gabbard Blindsided CIA over Revoking Clearance of Undercover Officer

https://www.wsj.com/politics/national-security/tulsi-gabbard-blindsided-cia-over-revoking-clearan...
1•JumpCrisscross•6m ago•0 comments

SEC Filings Data Visualizer

https://nomas.fyi/news
1•nomas_research•6m ago•1 comments

Chinese Money Launderers Are Moving Billions Through U.S. Banks

https://www.wsj.com/finance/regulation/chinese-money-launders-are-moving-billions-through-u-s-ban...
1•JumpCrisscross•6m ago•0 comments

Canada's 2023 Wildfires Pushed Air Pollution to Decade-Level Highs

https://financialpost.com/pmn/business-pmn/canadas-2023-wildfires-pushed-air-pollution-to-decade-...
1•neom•7m ago•0 comments

Biodegradable ultrasound contrast tape for tracing intestinal motility

https://www.nature.com/articles/s41467-025-63310-8
1•bookofjoe•8m ago•0 comments

Show HN: Open-source Next.js 15 boilerplate – auth, DB, intl, tests, monitoring

1•creativedg•8m ago•0 comments

gpt-oss is a great model

https://twitter.com/ggerganov/status/1961070963107188849
1•tosh•8m ago•0 comments

When I play games with how I save money, I'm less inclined to spend it

https://pockets.bearblog.dev/20250827/
1•warrenm•9m ago•0 comments

Chaos and Coherence in Business

https://commoncog.com/chaos-and-coherence-in-business/
1•cjbarber•9m ago•0 comments

Show HN: I built a Chrome extension–export Facebook birthdays to text reminders

https://chromewebstore.google.com/detail/export-facebook-birthdays/offiohhackmhmhinaaacooffkeaaodgh
2•samfeldman•10m ago•0 comments

Only Experts Can Write Good Prompts

https://www.vincentschmalbach.com/only-experts-can-write-good-prompts/
1•vincent_s•10m ago•0 comments

Scientists just developed a new AI modeled on the human brain

https://www.livescience.com/technology/artificial-intelligence/scientists-just-developed-an-ai-mo...
1•kmdupree•11m ago•0 comments

China a $50B/yr Nvidia market if US would allow competitive product sales: Huang

https://www.theregister.com/2025/08/27/nvidia_q2_china/
1•rntn•12m ago•0 comments

Reading for pleasure is going out of style

https://www.axios.com/2025/08/27/reading-books-pleasure-data
1•Brajeshwar•14m ago•0 comments

Vitamin D supplementation of 2000 IU daily is recommended for adults

https://www.mdpi.com/2072-6643/16/3/391
3•toomuchtodo•14m ago•1 comments

The Curious History of New England's Hermit Tourism

https://www.atlasobscura.com/articles/new-england-hermit-tourism
1•Brajeshwar•14m ago•0 comments

Scientist claims mega-flood wiped out early civilisations

https://thetatva.in/science/scientist-claims-mega-flood-wiped-out-early-civilisations-nothing-in-...
1•Brajeshwar•15m ago•0 comments

How Apple AirPods Work [video]

https://www.youtube.com/watch?v=PB_8dGKh9JI
1•ingve•15m ago•0 comments

Precognition and Time

https://www.popularmechanics.com/science/a65653221/science-of-precognition-explained/
1•gmays•15m ago•0 comments

Koko Analytics 2.0

https://www.kokoanalytics.com/2025/08/27/koko-analytics-version-2-is-here/
1•pentagrama•16m ago•0 comments

ArtSci MagiCalc – Computer Ads from the Past

https://computeradsfromthepast.substack.com/p/artsci-magicalc
1•rbanffy•18m ago•0 comments

Multiple sclerosis: Triggers in the gut flora

https://www.mpg.de/24685137/0507-psy-multiple-sclerosis-triggers-in-the-gut-flora-155111-x
1•debesyla•18m ago•0 comments

The brain weaves time into memories with unique neural bookmarks

https://neurosciencenews.com/time-mapping-memory-29409/
1•geox•21m ago•0 comments

White Dwarf Stars Could Create Surprisingly Common Long Lived Habitable Zones

https://www.universetoday.com/articles/white-dwarf-stars-could-create-surprisingly-common-long-li...
1•rbanffy•21m ago•0 comments

New Research Effort Could Boost Nuclear Fuel Performance

https://www.pnnl.gov/news-media/new-research-effort-could-boost-nuclear-fuel-performance
1•PaulHoule•22m ago•0 comments

Rendering a Game in Real-Time with AI

https://blog.jeffschomay.com/rendering-a-game-in-real-time-with-ai
67•jschomay•2h ago

Comments

sjsdaiuasgdia•2h ago
The "real-time" version looks awful with constantly shifting colors, inconsistently sized objects, and changing interpretations of the underlying data, resulting in what I would consider an unplayable game vs the original ASCII rendering.

The "better" version renders at a whopping 4 seconds per frame (not frames per second) and still doesn't consistently represent the underlying data, with shifting interpretations of what each color / region represents.

faeyanpiraat•1h ago
Yeah, but I find this fascinating regardless.

This is heading in the direction of a kind of simulation where behavior is determined not by code but by a kind of "real" physics.

roxolotl•1h ago
Why does using a language/vision model feel more “real” to you than using equations which directly describe our understanding of physics?
curl-up•1h ago
Not OP, but I have long thought of this type of approach (underlying "hard coded" object tracking + fuzzy AI rendering) to be the next step, so I'll respond.

The problem with using equations is that they seem to have plateaued. Hardware requirements for games today keep growing, and yet every character still has that awful "plastic skin", among all the other issues, and for a lot of people (me included) this creates heavy uncanny-valley effects that make modern games unplayable.

On the other hand, images created by image models today look fully realistic. If we assume (and I fully agree that this is a strong and optimistic assumption) that it will soon be possible to run such models in real time, and that techniques for object permanence will improve (as they keep improving at an incredible pace right now), then this might finally bring us to the next level of realism.

Even if realism is not what you're aiming for, I think it's easy to imagine how this might change the game.

jsheard•40m ago
You're comparing apples to oranges, holding up today's practical real-time rendering techniques against a hypothetical future neural method that runs orders of magnitude faster than anything available today. If we grant "equation based" methods the same liberty then we should be looking ahead to real-time path-tracing research, which is already at the borderline of practicality on high-end hardware.
curl-up•31m ago
The question was "why does it feel more real", and I answered that: because the best AI-generated images today feel more real than the best 3D renders, even when they take all the compute in the world to finish. So I can imagine that trend carrying forward into real-time rendering as well.

I did not claim that AI-based rendering will overtake traditional methods, and I even explicitly said that this is a heavy assumption, but I explained why I see it as exciting.

sjsdaiuasgdia•9m ago
I find it odd that you're that bothered by uncanny valley effects from game rendering but apparently not by the same in image model outputs. They get little things wrong all the time and it puts me off the image almost instantly.
actuallyalys•1h ago
Yeah, as interesting as the concept is, the lack of frame-to-frame consistency is a real problem. It also seems like the computing requirements would be immense—the article mentions burning through $10 in seconds.
elpocko•25m ago
You can do this at home on your own computer with a 40x0 consumer GPU at 1-2 fps. You have to choose a suitable diffusion model; there are models that provide sub-second generation of 1024x1024 images. The computing requirements and electricity costs are the same as when running a modern game.
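
For the curious, a minimal sketch of this kind of fast local pipeline, assuming the Hugging Face diffusers library and SDXL-Turbo (both illustrative choices, not named in the comment):

    # Hedged sketch: few-step local image generation with SDXL-Turbo via
    # diffusers. Model choice and prompt are illustrative assumptions.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # Turbo-style models are distilled for very few denoising steps,
    # which is what makes ~1 fps feasible on a consumer GPU.
    image = pipe(
        prompt="top-down pixel-art dungeon, stone walls, torchlight",
        num_inference_steps=1,   # single step: the speed/quality trade-off
        guidance_scale=0.0,      # turbo models are trained without CFG
    ).images[0]
    image.save("frame.png")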
harph•43m ago
It seems it's because OP is generating the whole screen every frame / every move. Of course that will give inconsistent results.

I wonder if this approach would work better:

1. generate the whole screen once

2. on update, create a mask for all changed elements of the underlying data

3. do an inpainting pass with this mask, with regional prompting to specify which parts have changed how

4. when moving the camera, do outpainting

This might not be possible with cloud-based solutions, but I can see it being possible locally; a rough sketch of the idea follows below.
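
A rough sketch of steps 1-3, assuming Hugging Face diffusers for the inpainting pass; the model name and tile size are illustrative, not from the thread (step 4, outpainting on camera moves, is left out):

    import numpy as np
    import torch
    from PIL import Image
    from diffusers import AutoPipelineForInpainting

    # Illustrative model choice; any inpainting-capable checkpoint works.
    pipe = AutoPipelineForInpainting.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    TILE = 32  # screen pixels per game-grid cell (assumed)

    def changed_mask(old_grid, new_grid):
        """White where the underlying game state changed (diffusers
        repaints white regions), black elsewhere."""
        diff = (old_grid != new_grid).astype(np.uint8)  # per-cell change flag
        mask = np.kron(diff, np.ones((TILE, TILE), np.uint8)) * 255
        return Image.fromarray(mask, mode="L")

    # Step 1: generate the whole screen once -> `frame` (a PIL image).
    # Steps 2-3: on each update, repaint only the cells that changed.
    def update_frame(frame, old_grid, new_grid, prompt):
        mask = changed_mask(old_grid, new_grid)
        return pipe(prompt=prompt, image=frame, mask_image=mask).images[0]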

johnfn•42m ago
> The "real-time" version looks awful, etc

Dang man it's just a guy showing off a neat thing he did for fun. This reaction seems excessive.

ozmodiar•18m ago
I like the idea behind https://oasis-ai.org/ where you can actually try to take advantage of the 'dream logic' inconsistency of each frame being procedurally generated based on the last one. For example, instead of building a house, build the corner of a house, look at that, then look back up and check if it hallucinated the rest of your ephemeral house for you. Of course that uses AI as the entire gameplay loop and not just a graphics filter. It's also... not great, but an interesting concept that I could see producing a fun dream logic game in the future.
g105b•1h ago
I've been trying to achieve the opposite of this project: render scenes in ASCII/ANSI in the style of old BBS terminal games. I've had terrible success so far. All the AI models I've tried only understand the concept of "pixel art" and not ASCII/ANSI graphics such as what can be seen on https://www.bbsing.com/ , https://16colo.rs , or on Reddit's r/ANSIart/ .

If anyone has any tips for how I could achieve this, I would love to hear your ideas.

elpocko•1h ago
Do you mean you want to use AI to generate new scenes in ANSI-art style, or do you mean you want to use AI to render pre-existing scenes as ANSI art?
cbm-vic-20•1h ago
...where "ASCII" means an image made up of a grid of elements from a limited set of glyphs.
tantalor•27m ago
And those glyphs are not ASCII

This is ASCII: https://commons.wikimedia.org/wiki/File:ASCII-Table-wide.svg
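
For illustration, the glyph-grid idea described above can be sketched in a few lines: map the average brightness of each image cell to a character from a plain-ASCII ramp (Pillow assumed; file name and ramp are illustrative):

    # Minimal sketch of the glyph-grid idea: one character per image cell,
    # chosen by brightness from a genuine-ASCII ramp.
    from PIL import Image

    RAMP = " .:-=+*#%@"  # dark -> light, all plain ASCII

    def to_ascii(path, cols=80):
        img = Image.open(path).convert("L")             # grayscale
        # terminal cells are roughly twice as tall as they are wide
        rows = max(1, int(img.height / (img.width / cols) / 2))
        img = img.resize((cols, rows))                  # one pixel per glyph
        px = list(img.getdata())
        return "\n".join(
            "".join(RAMP[p * (len(RAMP) - 1) // 255]
                    for p in px[r * cols:(r + 1) * cols])
            for r in range(rows)
        )

    print(to_ascii("scene.png"))  # "scene.png" is a placeholder input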

mason_mpls•16m ago
If we’re talking about Dwarf Fortress, it uses an old IBM charset, assuming this is some branch off that
panki27•1h ago
I'm pretty sure the generation could easily run locally on a low-to-mid tier graphics card.

While it might take a bit longer to generate, you're still saving network and authentication latency.

echelon•42m ago
This was built in September 2022, and it's still pretty mind-blowing today:

https://madebyoll.in/posts/game_emulation_via_dnn/demo/

https://madebyoll.in/posts/game_emulation_via_dnn/

Hook world state up to a server and you have multiplayer.

2025 update:

https://madebyoll.in/posts/world_emulation_via_dnn/

https://madebyoll.in/posts/world_emulation_via_dnn/demo/

stego-tech•37m ago
…but then you just have a graphics card, built to render graphics, that you could tap instead through traditional tooling that’s already widely known and which produces consistent output via local assets.

While the results of the experiment here are interesting from an academic standpoint, it has the same issue as remote game streaming: the amount of time you have to process input from the player, render visuals and sound, and transmit it back to the player precludes remote rendering for all but the most latency-insensitive games and framerates. It's publishers and IP owners trying to solve the problem of ownership (in that they don't want anyone to own anything, ever) rather than tackling any actually important issues (such as improving inefficient rendering pipelines, asset compression and delivery methods, and the sandboxing of game code).

Trying to make AI render real-time visuals is the wrongest use of the technology.

steveruizok•42m ago
We did a similar thing at tldraw with Draw Fast (https://drawfast.tldraw.com/) and it was very fun. It inspired a few knock-offs too. We had to shut it down because it was getting popular on Russian Reddit. A related project, Lens (https://lens.tldraw.com), also used the same technique, but in a collaborative drawing app.

At the peak, when we were streaming back video from Fal and getting <100ms of lag, the setup produced one of the most original creative experiences I’d ever had. I wish these sorts of ultra-fast image generators received more attention and research because they do open up some crazy UX.

echelon•34m ago
LCM is what Krea used to gain massive momentum and raise their first $30M.

The tactile reaction to playing with this tech is that it feels utterly sci-fi. It's so freaking cool. Watching videos does not do it justice.

Not enough companies or teams are trying this stuff. This is really cool tech, and I doubt we've seen the peak of what real time rendering can do.

The rendering artifacts and quality make the utility for production use cases a little questionable, but it can certainly do "art therapy" and dreaming / ideation.

hiatus•20m ago
Is there any chance you'd open up the source for those projects so others can play with them?
myflash13•37m ago
This is a precursor to a dystopian future where reality will be a game generated in real time at 60 FPS and streamed to your brain over Neuralink.
lm28469•35m ago
Some people on HN will surely cheer for it; it's the peak of efficiency, you don't even have to move anymore!
mason_mpls•15m ago
I think that’ll be one of the few good parts of it imho
jebarker•14m ago
In some ways that’s a lot like how consciousness works isn’t it?
coolKid721•33m ago
I do not get the point of this at all. Why not just generate game assets and run them in an engine? With this format there is no guarantee that the thing you saw before will look the same (and that is not a fixable problem).

Figuring out and improving AI approaches for generating consistent, decent-quality game assets would actually be useful; past a tech demo, I have no idea what the point of this is (and for some reason all the "ai game" people take this approach).

abbycurtis33•21m ago
The tech will improve to far exceed the capabilities of a game engine: real-time improvisation, infinite choices, scope, etc.

It makes no sense when people say AI can't do this or that. It will do it next week.

lukan•17m ago
"It makes no sense when people say AI can't do this or that. It will do it next week."

So full self-driving vehicles will finally be ready next week then? Great to hear, though to be honest, I remain sceptical.

kayamon•15m ago
Waymos are driving themselves around several cities right now.
Janicc•12m ago
You really need to update your language model, because self-driving cars have been driving around on their own for at least a year now.
jdiff•7m ago
"Full self driving" was the term used and I believe the distinction is relevant to the point being made.
sho_hn•4m ago
I understand the point you're making, but I think it's not a good one.

The failure mode for getting self-driving wrong is grave. The failure mode for rendering game graphics imperfectly is to require a bit of suspension of disbelief (it's not a linear spectrum, given the famous uncanny valley, etc., I'm aware). Games already have plenty of abstract graphics, invisible walls, and other kludges that require buy-in from users. It's a lot easier to scale that wall.

127•11m ago
An interactive feedback loop that handles the AI's various edge cases, rendering, asset loading and display, global state tracking, user input, etc. is still a game engine.
GPerson•5m ago
I’m looking forward to the day when magical thinking such as this gets grounded again. That is when the real work will start anew.
dagi3d•19m ago
hack, learn and have fun, that's it.
sho_hn•14m ago
> I do not get the point of this at all

Dunno, this seems like an avenue definitely worth exploring.

Plenty of game applications today already have a render path of input -> pass through AI model -> final image. That's what the AI-based scaling and frame interpolation features like DLSS and FSR are.

In those cases, you have a very high-fidelity input doing most of the heavy lifting, and the AI pass filling in gaps.

Experiments like the OP's are about moving that boundary and "prompting" with a lower-fidelity input and having the model do more.

Depending on how well you can tune and steer the model, and where you place the boundary line, this might well be a compelling and efficient compute path for some applications, especially as HW acceleration for model workloads improves.

No doubt we will see games do variations of this theme, just like games have thoroughly explored other technology to generate assets from lower-fidelity seeds, e.g. classical proc gen. This is all super in the wheelhouse of game development.

Some kind of AI-first demoscene would be a pretty cool thing too. What's a trained model if not another fancy compressor?
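
For illustration, the input -> AI pass -> final image boundary can be sketched with a toy super-resolution network (untrained PyTorch, purely illustrative; real DLSS/FSR-class models are far more involved):

    # Toy sketch of the DLSS-style render path: the engine renders a
    # low-resolution frame and a small network upscales it.
    import torch
    import torch.nn as nn

    class UpscaleNet(nn.Module):
        """Stand-in for a learned 2x super-resolution pass."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2*2 upscale factor
                nn.PixelShuffle(2),   # rearrange channels to 2x resolution
            )

        def forward(self, low_res):
            return self.body(low_res)

    model = UpscaleNet().eval()
    low_res_frame = torch.rand(1, 3, 360, 640)  # engine output at 640x360
    with torch.no_grad():
        final_frame = model(low_res_frame)      # AI pass fills in detail
    print(final_frame.shape)                    # torch.Size([1, 3, 720, 1280])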

turnsout•13m ago
It's an interesting tech demo. One compelling use case for AI rendering is changing the style on the fly: for example, a certain power-up could change the look to a hyper-saturated comic-book style. That's definitely achievable with traditional methods, but because AI is prompt-based, you could combine or extend styles dynamically.
127•13m ago
Visually speaking, there are always issues in tying disparate assets together in a seamless fashion. I can see how AI could easily be used to "hide the seams", so to speak. I think a hybrid approach would definitely be an improvement.
mason_mpls•17m ago
Now Dwarf Fortress can eat your CPU, memory, and GPU. Exciting news.