frontpage.

Nvidia CEO Says AI Capital Spending Is Appropriate, Sustainable

https://www.bloomberg.com/news/articles/2026-02-06/nvidia-ceo-says-ai-capital-spending-is-appropr...
1•virgildotcodes•50s ago•1 comments

Show HN: StyloShare – privacy-first anonymous file sharing with zero sign-up

https://www.styloshare.com
1•stylofront•2m ago•0 comments

Part 1 of the Persistent Vault Issue: Your Encryption Strategy Has a Shelf Life

1•PhantomKey•6m ago•0 comments

Show HN: Teleop_xr – Modular WebXR solution for bimanual robot teleoperation

https://github.com/qrafty-ai/teleop_xr
1•playercc7•8m ago•1 comments

The Highest Exam: How the Gaokao Shapes China

https://www.lrb.co.uk/the-paper/v48/n02/iza-ding/studying-is-harmful
1•mitchbob•13m ago•1 comments

Open-source framework for tracking prediction accuracy

https://github.com/Creneinc/signal-tracker
1•creneinc•14m ago•0 comments

India's Sarvam AI launches Indic-language-focused models

https://x.com/SarvamAI
2•Osiris30•16m ago•0 comments

Show HN: CryptoClaw – open-source AI agent with built-in wallet and DeFi skills

https://github.com/TermiX-official/cryptoclaw
1•cryptoclaw•18m ago•0 comments

Show HN: Make OpenClaw respond in Scarlett Johansson’s AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•20m ago•1 comments

CReact Version 0.3.0 Released

https://github.com/creact-labs/creact
1•_dcoutinho96•22m ago•0 comments

Show HN: CReact – AI Powered AWS Website Generator

https://github.com/creact-labs/ai-powered-aws-website-generator
1•_dcoutinho96•23m ago•0 comments

The rocky 1960s origins of online dating (2025)

https://www.bbc.com/culture/article/20250206-the-rocky-1960s-origins-of-online-dating
1•1659447091•28m ago•0 comments

Show HN: Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents

https://github.com/Parassharmaa/agent-fetch
1•paraaz•30m ago•0 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
8•witnessme•34m ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
2•aloukissas•37m ago•1 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
2•bigbromaker•40m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•46m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
6•alephnerd•49m ago•2 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•49m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
2•pbradv•52m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
4•hasheddan•52m ago•0 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•1h ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•1h ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
34•duxup•1h ago•6 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•1h ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•1h ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•1h ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•1h ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•1h ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•1h ago•0 comments

Robotopia: A 3D, first-person, talking simulator

https://elbowgreasegames.substack.com/p/introducing-robotopia-a-3d-first
104•psawaya•1mo ago

Comments

4b11b4•4w ago
I'm imagining a version of this where you have to use various prompt- or data-centric attacks to navigate scenarios
tom_0•4w ago
We want to gamify prompt hacking and give people a UI to add/remove chunks of the system prompt. It'll be unlocked by collecting widgets around the place.
fosterfriends•4w ago
I’m so excited to see LLMs used more creatively in video games. So many new mechanics can be unlocked with LLMs as judges
psawaya•4w ago
Agreed!

Some other cool ones I've seen: https://store.steampowered.com/app/2542850/1001_Nights/ https://www.playsuckup.com/

shminkle•4w ago
Robotopia was very much inspired by Suck Up. It was the first LLM game that kinda cracked the 3D world.
wavemode•4w ago
I like the concept. Though, couldn't they have found better text-to-speech voices? Or is it meant to be humorous how bad they are?
tom_0•4w ago
It's a stylistic choice for sure. A little better than that is straight into the uncanny valley, and human-level is too high-latency and too expensive for us. We found that this level of crappy works great in practice, plus it runs on-device! We use Rhasspy Piper to generate them.
Hammershaft•4w ago
I would personally avoid voices that skew too close to the common TikTok TTS AI. Currently the heavy robots with the lower, bassier voices sell that clunky robot voice vibe much better, but some of the more generic voices immediately take me out.
tom_0•4w ago
Unfortunately, they are close because some of them ARE the TikTok AI voices you've heard! I'm working on hiring VAs to make custom datasets, though. We'll have our own unique voices by 1.0 for sure.
lifetimerubyist•4w ago
Another game that has LLM-powered NPCs is the F2P action game from China called "Where Winds Meet", and players came up with all sorts of hilarious ways to cheat quests and do other fun stuff via prompt injections.

https://www.dexerto.com/gaming/where-winds-meet-players-are-...

https://www.rockpapershotgun.com/where-winds-meet-player-con...

shminkle•4w ago
I had no idea this game had LLM NPCs. Interesting
tom_0•4w ago
Hey, Tommaso here, I'm one of the founders of the Robotopia studio. I didn't expect to see this here! Ask me anything :)
Scaevolus•4w ago
Are the LLMs run on-device, or does this use cloud compute?

(Off-topic AMA question: Did you see my voxel grid visibility post?)

tom_0•4w ago
The "big" one is Llama3.3-70b on the cloud, right now. On GroqCloud in fact, but we have a cloud router that gives us several backups if Groq abandoned us.

We use a ton of smaller models (embeddings, vibe checks, TTS, ASR, etc.), and if we get enough scale we'll try to run those locally for users who have big enough GPUs.

(You mean the voxel grid visibility from 2014?! I'm sure I did at the time... but I left MC in 2020, so I don't even remember my own algorithm right now)
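
A rough sketch of what that kind of provider fallback can look like (hypothetical provider list, model names, and keys, not the actual Robotopia setup; assumes OpenAI-compatible chat endpoints):

    import requests

    # Ordered list of OpenAI-compatible chat endpoints to try in turn.
    # URLs, model names, and keys here are illustrative placeholders.
    PROVIDERS = [
        {"url": "https://api.groq.com/openai/v1/chat/completions",
         "model": "llama-3.3-70b-versatile", "key": "GROQ_KEY"},
        {"url": "https://backup-provider.example/v1/chat/completions",
         "model": "llama-3.3-70b", "key": "BACKUP_KEY"},
    ]

    def chat(messages, timeout=10.0):
        last_error = None
        for p in PROVIDERS:
            try:
                resp = requests.post(
                    p["url"],
                    headers={"Authorization": "Bearer " + p["key"]},
                    json={"model": p["model"], "messages": messages},
                    timeout=timeout,
                )
                resp.raise_for_status()
                return resp.json()["choices"][0]["message"]["content"]
            except Exception as e:  # timeout, HTTP error, malformed body, ...
                last_error = e      # fall through to the next provider
        raise RuntimeError("all providers failed") from last_error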

Scaevolus•4w ago
Shipping GPU-accelerated ML models in games looks difficult; are there any major examples other than vendor-locked upscaling like DLSS or FSR?

(Yep! https://cod.ifies.com/voxel-visibility/ )

tom_0•4w ago
Yeah, it's extremely difficult right now, especially for a Windows game that can't have players install PyTorch and the CUDA Toolkit!

ONNX and DirectML seem sort of promising right now, but it's all super raw. Even if that worked, local models are bottlenecked by VRAM, and that's never been more expensive. And we need to fit 6 GB of game in there as well. Even if _that_ worked, we'd need to timeslice the compute inside the frame so that the game doesn't hang for a second. And then we'd get to fight every driver in existence :) Basically it's just not possible unless you have a full-time expert dedicated to this, IMO. Maybe it'll change!

About the voxel visibility: yeah, that was awesome, I remember :) Long story short, MC is CPU-bound and the frustum clipping's CPU cost wasn't paid off by the reduced overdraw, so it wasn't worth it. Then a guy called Jonathan Hoof rewrote the entire thing, splitting it into a 360° scan done on another thread when you changed chunks plus an in-frustum walk that worked completely differently; I don't remember the details, but it did fix the ravine issue entirely!

Scaevolus•4w ago
GGML is another neat ML abstraction layer, but I don't think much work has been dedicated to the Windows port.
tom_0•4w ago
llama.cpp still runs on GGML, and that still requires CUDA to be installed, unfortunately. I saw a PR for DirectML, but I'm not really holding my breath.
lostmsu•4w ago
You don't have to install the whole CUDA. They have a redistributable.
tom_0•3w ago
Oh, I can't believe I missed that! That makes whisper.cpp and llama.cpp valid options if the user has Nvidia, thanks.
lostmsu•3w ago
Whisper.cpp and llama.cpp also work with Vulkan.
tom_0•3w ago
Yeah, I researched this and I absolutely missed this whole part. In my defense, I looked into this in 2023, which is ages ago :) Looks like local models are getting much more mature.
Charmunk•4w ago
Hey! Robotopia looks awesome, I'm excited to try it out when it launches. How do you convert the LLM output to actions? Are there broader actions (i.e. creating any object, moving anything anywhere) exposed to the LLM, or is it more specific tools it can call?
tom_0•4w ago
Thanks :) It may sound insane, but we convert actions to Python functions and then ask the LLM to write a Python script that actually runs in IronPython inside the game (rough sketch below). Then we have a visual Behavior Tree system to let our designer define the actions. So yeah, they get a bunch of general actions like walk, talk, follow, interact, etc.

PS: I think MCP/Tool Calls are a boondoggle and LLMs yearn to just run code. It's crazy how much better this works than JSON schema etc.
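
The pattern, very roughly (hypothetical action names and a print-stub version; in the real game these functions are bound to behavior-tree nodes through IronPython):

    # Whitelisted actions the designer exposes to the LLM.
    def walk_to(target):
        print("walking to", target)

    def say(text):
        print("robot says:", text)

    def follow(target):
        print("following", target)

    ACTIONS = {"walk_to": walk_to, "say": say, "follow": follow}

    SYSTEM_PROMPT = (
        "You control a robot. Reply ONLY with a Python script that calls "
        "these functions: " + ", ".join(ACTIONS) + "."
    )

    def run_robot_script(script_text):
        # The generated script only sees the whitelisted actions, no builtins.
        exec(script_text, {"__builtins__": {}}, dict(ACTIONS))

    # e.g. if the model returns:
    generated = 'say("Right away!")\nwalk_to("kitchen")'
    run_robot_script(generated)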

woodrowbarlow•3w ago
uhhh... you're running generated code on your customers' PCs? what kind of sandboxing do you have?
tom_0•3w ago
Fair reaction, tbh. Right now there's a time watchdog plus I'm entirely disabling all I/O and imports, but going forward I want to replace it with proper sandboxing tech... things I looked into are V8 isolates, compilation to WASM, implementing our own gutted Python interpreter, spinning up a locked-down process, and others. I'm definitely aware of the risk here. The good news is that unless we get pwned, LLMs are very unlikely to write malicious code for the user.
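
Roughly the shape of that stopgap, as a CPython sketch (the real version lives in IronPython inside the engine, and the long-term plan is one of the heavier options above):

    import multiprocessing

    def _run(script_text, allowed):
        # Empty builtins: no open(), no __import__, no eval/exec from inside.
        exec(script_text, {"__builtins__": {}}, dict(allowed))

    def run_sandboxed(script_text, allowed, time_budget=2.0):
        # Time watchdog: run the script in a worker and kill it if it overruns.
        worker = multiprocessing.Process(target=_run, args=(script_text, allowed))
        worker.start()
        worker.join(time_budget)
        if worker.is_alive():
            worker.terminate()
            raise TimeoutError("robot script exceeded its time budget")
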
vannevar•3w ago
>...LLMs are very unlikely to write malicious code for the user.

Do you have any idea what the actual probability is? Because if millions of people start using the system, 'very unlikely' can turn into 'virtual certainty' pretty quickly.

woodrowbarlow•3w ago
yikes
dandelionv1bes•4w ago
This is fantastic. I think the Substack post nails what was missing from a lot of these LLM-driven NPCs that didn't feel authentic. I have a couple of follow-up questions on specifics relating to analysing behaviour with LLMs (I'm in game dev myself). Would it be possible to speak to you directly about them?
tom_0•4w ago
Thanks :) If you want, I'm on the Discord linked on our landing page; it's fun stuff to talk about!
dandelionv1bes•4w ago
Amazing! Thanks will join.
Tossrock•4w ago
Do you have a per-player budget for cloud usage? What happens if people really like the game and play it so much it starts getting expensive to keep running? I guess at $0.79/Mtok, Llama 70B is pretty affordable, but a per-player opex seems hard to handle without a subscription model.
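
Back-of-envelope, with made-up usage numbers: say ~50 exchanges per hour at ~3k tokens each (system prompt + memory + reply), which is ~150k tokens an hour; 150k tokens at $0.79/Mtok is about $0.12 per hour of play. A 100-hour player is then roughly $12 of inference alone, which is why a one-time purchase price is hard to size.
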
tom_0•4w ago
Our initial plan was to simply ask enough for the game that the price would cover the costs on average... but that means we're basically encouraged to have people play the game as little as possible? We're looking into some kind of subscription now; it sounds weird, but I do think it's a better incentive in this case. Plus we can actually ask for less upfront.
d3rockk•3w ago
This has insanely incredible potential for language learning. Do you plan to implement support for additional languages?
tom_0•3w ago
Yes, but every language is going to be a "port", not something contracted out like traditional localization. I haven't decided how exactly, but language conversion will land somewhere between these two extremes:

1. (expensive) Pick a suite of "native" models (e.g. models from China), plus TTS and ASR. Rewrite all the prompts in the target language. Revalidate all characters by hand.

2. (cheap) Slap a translation model around input and output and let the game run in English internally (rough sketch below). My gut feeling is that this could have very poor results, though, and increase latency.

It's definitely a research project, this has never been done before.
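
Option 2 in a nutshell (toy stand-ins for the translation and game models; the worry is that the two extra round trips add latency and lose nuance):

    def translate(text, source, target):
        # Stand-in for a real MT model or API call.
        return "[%s->%s] %s" % (source, target, text)

    def game_llm(english_text):
        # Stand-in for the existing English-only game pipeline.
        return "Sure, follow me to the kitchen."

    def handle_player_utterance(text, player_lang):
        english_in = translate(text, player_lang, "en")    # round trip 1
        reply_en = game_llm(english_in)                    # main model call
        return translate(reply_en, "en", player_lang)      # round trip 2

    print(handle_player_utterance("Puis-je avoir un sandwich ?", "fr"))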

AlphaWeaver•3w ago
Do you think there's a path where you can pregenerate popular paths of dialogue to avoid LLM inference costs for every player? And possibly pair it with a lightweight local LLM to slightly adapt the responses? While still shelling out to a larger model when users go "off the rails"?
themanmaran•3w ago
Not the founder, but having run conversational agents at decent scale, I don't think the cost actually matters much early on.

It's almost always better to pay more for the smarter model than to potentially give a worse player experience.

If they had 1M+ players there would certainly be room to optimize, but starting out you'd spend more trying to engineer the model switcher than you would save in token costs.

tom_0•3w ago
I agree, trying to save on costs early on is basically betting against things getting better. Not only that but in almost every case people prefer the best model they can get!

Also, I think our selling point is rewarding creativity with emergent behavior. I think baked dialogue would turn into a traditional game with worse writing pretty quickly, and then you've got a problem. For example, this AI game [1] does multiple-choice dialogue with a local model and people seem a bit lukewarm about it.

We could use it to cache popular Q&A (sketch below), but in my experience humans are insane and nobody ever says even remotely similar things to robots :)

[1] https://store.steampowered.com/app/2828650/The_Oversight_Bur...
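
For reference, the usual shape of that cache idea (toy embedding and threshold; a real version would use a proper embedding model and would only skip the big model on a hit):

    import math

    def embed(text):
        # Toy stand-in for a real embedding model: fixed-size character histogram.
        vec = [0.0] * 64
        for i, ch in enumerate(text.lower()):
            vec[i % 64] += ord(ch)
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]

    def cosine(a, b):
        return sum(x * y for x, y in zip(a, b))

    class ResponseCache:
        def __init__(self, threshold=0.95):
            self.entries = []        # (embedding, cached answer)
            self.threshold = threshold

        def lookup(self, question):
            q = embed(question)
            best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
            if best and cosine(q, best[0]) >= self.threshold:
                return best[1]       # similar enough: reuse, skip the big model
            return None              # miss: call the big model, then store()

        def store(self, question, answer):
            self.entries.append((embed(question), answer))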

johnea•4w ago
Max Headroom?
malchow•4w ago
This is an incredible foretaste of what AI can enable in gaming. Not replacing humans (the creators here are former leaders from Minecraft), but rather simply unlocking more fun gameplay by offering creativity, humor, and branched storytelling customized to the player.
Workaccount2•3w ago
I strongly suspect that the advent of LLMs stalled the new Elder Scrolls game by another 5-6 years.
malchow•3w ago
What's interesting is you might not want to see de novo AI-generated storytelling (slop factor), but you might really like the way AI can make a story crafted by humans more interactive.
mavamaarten•3w ago
It's going to be a balancing act. There are going to be plenty of companies that are just going to be greedy and will generate AI slop without checking, which will undoubtedly tank the quality of many games in the near future.

When applied smartly and with human supervision, I think that AI could easily help humans build game worlds and stories that were previously impossible to achieve.

tom_0•3w ago
Hah, from my knowledge of traditional AAA, there is zero chance any AAA game in development right now uses LLMs. A lot of them don't even use them for coding, and gamedevs' mood about AI is abysmal.
Workaccount2•3w ago
Let me just remind you that Microsoft owns the Elder Scrolls franchise now, for better or worse.
tom_0•3w ago
I know, but it's a bit of an unstoppable force vs. immovable object situation unless something changes. If they do it, I hope it'll be better than Copilot integrations :)
dyauspitr•3w ago
Why? Because they feel like it needs to be a part of the game?
gimun•3w ago
Nice concept and good try!
tom_0•3w ago
Thanks :)
Rooster61•3w ago
This looks like a lot of fun. Is there a way to use text rather than speech for input? I'm not particularly fond of my voice getting sent to an LLM.
tom_0•3w ago
Yeah, there's a toggle to type that you can switch at any time; it actually lowers latency.