
Show HN: I built a "Do not disturb" Device for my home office

https://apoorv.page/blogs/over-engineered-dnd
50•quacky_batak•4d ago•19 comments

Show HN: Free and local browser tool for designing gear models for 3D printing

https://gears.dmtrkovalenko.dev
22•neogoose•11h ago•6 comments

Show HN: KeelTest – AI-driven VS Code unit test generator with bug discovery

https://keelcode.dev/keeltest
23•bulba4aur•6h ago•8 comments

Show HN: SMTP Tunnel – A SOCKS5 proxy disguised as email traffic to bypass DPI

https://github.com/x011/smtp-tunnel-proxy
116•lobito25•19h ago•40 comments

Show HN: Comet MCP – Give Claude Code a browser that can click

https://github.com/hanzili/comet-mcp
24•hanzili•3d ago•25 comments

Show HN: Seapie – a Python debugger where breakpoints drop into a REPL

https://github.com/hirsimaki-markus/seapie
7•markushirsimaki•1h ago•1 comment

Show HN: Tylax – A bidirectional LaTeX to Typst converter in Rust

https://github.com/scipenai/tylax
11•democat•4h ago•0 comments

Show HN: VaultSandbox – Test your real MailGun/SES/etc. integration

https://vaultsandbox.com/
52•vaultsandbox•1d ago•9 comments

Show HN: I built a 3D World Map and multiplayer geography game using Three.js

https://www.mixora.xyz
2•qwrwenm•3h ago•0 comments

Show HN: Make audio loops online

https://makeloops.online/
63•bilalba•2d ago•22 comments

Show HN: Mantic.sh – A structural code search engine for AI agents

https://github.com/marcoaapfortes/Mantic.sh
73•marcoaapfortes•1d ago•34 comments

Show HN: 48-digit prime numbers every git commit

https://textonly.github.io/git-prime/
65•keepamovin•6d ago•52 comments

Show HN: KektorDB – Lightweight, Embeddable Vector+Graph Database Written in Go

https://github.com/sanonone/kektordb
2•san0n•4h ago•1 comment

Show HN: Can you hit replacement? A fertility SIM with cited sources

https://www.tfrsim.com/
3•joshuafkon•5h ago•0 comments

Show HN: Prism.Tools – Free and privacy-focused developer utilities

https://blgardner.github.io/prism.tools/
361•BLGardner•1d ago•99 comments

Show HN: Stash – Sync Markdown Files with Apple Notes via CLI

https://github.com/shakedlokits/stash
68•shuka•1d ago•21 comments

Show HN: Tailsnitch – A security auditor for Tailscale

https://github.com/Adversis/tailsnitch
271•thesubtlety•2d ago•28 comments

Show HN: Arabic Calligraphy Generator – 11 styles, free, no signup

https://arabiccalligraphygenerator.online/
2•zaochen1224•5h ago•1 comment

Show HN: A simple way to find open source issues to contribute to

https://K-dash.github.io/contrib-fyi/
2•K-dash•5h ago•0 comments

Show HN: Milkyboard – Synth Keyboard with Milkdrop Visualizer

https://milkyboard.com/
2•amadeuspagel•5h ago•0 comments

Show HN: Deep learning without gradient descent, 500 layers, no skip connections

https://github.com/xolod7/polyharmonic-cascade
4•Yuriy_Bakhvalov•5h ago•1 comment

Show HN: Metabase-Impact – Find which Metabase questions break before you deploy

https://github.com/yukipeters/metabase-impact
3•yukipeters•6h ago•0 comments

Show HN: DoNotNotify – Log and intelligently block notifications on Android

https://donotnotify.com/
339•awaaz•2d ago•163 comments

Show HN: llmgame.ai – The Wikipedia Game but with LLMs

https://www.llmgame.ai
24•jmcallister•1d ago•21 comments

Show HN: Jax-JS, array library in JavaScript targeting WebGPU

https://ss.ekzhang.com/p/jax-js-an-ml-library-for-the-web
79•ekzhang•1d ago•21 comments

Show HN: Foundertrace – chain of YC startups founded by its employees

https://foundertrace.com/
38•loondri•3d ago•12 comments

Show HN: Put Greenland on the Moon (interactive map for size compare)

https://github.com/ObservedObserver/world-map-reality
3•loa_observer•6h ago•1 comment

Show HN: GPU Cuckoo Filter – faster queries than Blocked Bloom, with deletion

https://github.com/tdortman/cuckoo-filter
31•tdortman•21h ago•3 comments

Show HN: Cited AI – AI answers with citations linking to exact source passages

https://getcitedai.com
6•collin1•7h ago•0 comments

Show HN: I built "Google" for searching Shadcn blocks on the web

https://shoogle.dev/
21•ali-dev•5d ago•3 comments

Show HN: Server-rendered multiplayer games with Lua (no client code)

https://cleoselene.com/
79•brunovcosta•2d ago
Hey folks — here’s a small experiment I hacked together over the weekend:

https://cleoselene.com/

In short, it’s a way to build multiplayer games with no client-side game logic. Everything is rendered on the server, and the game itself is written as simple Lua scripts.

I built this to explore a few gamedev ideas I’ve been thinking about while working on Abstra:

- Writing multiplayer games as if they were single-player (no client/server complexity)

- Streaming game primitives instead of pixels, which should be much lighter

- Server-side rendering makes cheating basically impossible

- Game secrets never leave the server
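
The server-authoritative model above can be sketched in a few lines. This is a hypothetical illustration (JavaScript standing in for the project's Lua scripting layer; `ServerGame`, `handleInput`, and `tick` are made-up names, not the actual Cleoselene API):

```javascript
// Hypothetical sketch of a server-authoritative tick: all game logic runs on
// the server, and each frame only drawing primitives are streamed to clients.
class ServerGame {
  constructor() {
    this.players = new Map(); // playerId -> { x, y, secret }
  }

  handleInput(playerId, input) {
    const p = this.players.get(playerId);
    if (!p) return;
    if (input === "left") p.x -= 1;
    if (input === "right") p.x += 1;
  }

  // Build one frame of draw commands; secrets are never serialized.
  tick() {
    const frame = [];
    for (const p of this.players.values()) {
      frame.push({ op: "circle", x: p.x, y: p.y, r: 8 });
    }
    return frame;
  }
}

const game = new ServerGame();
game.players.set("a", { x: 10, y: 5, secret: "powerup-location" });
game.handleInput("a", "right");
console.log(game.tick()); // [{ op: "circle", x: 11, y: 5, r: 8 }]
```

Because clients only ever see draw commands, the `secret` field literally cannot leak — there is no code path that serializes it.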

This isn’t meant to be a commercial project — it’s just for fun and experimentation for now.

If you want to try it out, grab a few friends and play here: https://cleoselene.com/astro-maze/

Comments

ghxst•2d ago
IMO eliminating as much client side authority as possible is a very good foundation for MMOs where the latency is acceptable or factored into all aspects of the game (looking at old school runescape as an example). Very cool project!
brunovcosta•2d ago
Thank you!
duduzeta•2d ago
Cool!! I'm trying to test here, but other ships keep attacking me and I don't know how to shoot :s
brunovcosta•2d ago
Amazing! hahaha.. Tip: Arrows + Z (shoot)
kibbi•2d ago
In my case, Z for shooting works only rarely. Usually nothing happens. How does the game code query the key?
iku•2d ago
I think, after shooting, it has to recharge. When recharged, the ship momentarily blink-expands in yellow. This means it is ready to fire again.

But sometimes I've been left without a recharge, and without shooting, and I don't know why.

brunovcosta•2d ago
exactly as @iku commented.. there is a cooldown time between shots.. it pulses when you're ready and resets when you try before it's loaded!

It seems that I should add better visual feedback haha

cmrdporcupine•2d ago
"We stream drawing primitives instead of heavy pixels or complex state objects."

This is cool ... but I suspect just pushing video frames like Stadia etc. did is just as efficient these days, a lot less complicated to implement, and needs no special client. Decent compression, hardware decode on almost every machine, hardware encode possible on the server side, and excellent browser support.

MarsIronPI•2d ago
On the other hand, you could take a list of primitives from, say, the JS Canvas API, come up with a format that can encode all of them and use that as your protocol. Bam, with that you get one client for any game that uses JS Canvas.
brunovcosta•2d ago
That's exactly my approach! I'm sending canvas commands instead of pixels, which makes things faster

That said.. I don't think Stadia could do that since it's not opinionated about the game engine. Unless they go really deep on the graphics card instructions instead, but then it becomes comparable to pixel rendering I guess
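
The generic-client idea in this exchange could be sketched as a tiny codec. The opcodes and JSON wire format below are hypothetical, not the project's actual protocol — the point is only that one client can replay any game's Canvas API calls:

```javascript
// Hypothetical codec: serialize a subset of Canvas 2D calls into a compact
// [opcode, ...args] form, so one generic client can replay any game's frames.
const OPS = { fillRect: 0, arc: 1, fillText: 2 };
const NAMES = Object.fromEntries(Object.entries(OPS).map(([k, v]) => [v, k]));

function encode(commands) {
  // JSON on the wire for simplicity; a binary format would be tighter.
  return JSON.stringify(commands.map(c => [OPS[c.op], ...c.args]));
}

function replay(wire, ctx) {
  // Replay decoded commands against any CanvasRenderingContext2D-like object.
  for (const [code, ...args] of JSON.parse(wire)) {
    ctx[NAMES[code]](...args);
  }
}

// Usage with a stub context standing in for a real canvas:
const calls = [];
const stubCtx = new Proxy({}, {
  get: (_, name) => (...args) => calls.push([name, args]),
});
replay(encode([{ op: "fillRect", args: [0, 0, 32, 32] }]), stubCtx);
console.log(calls); // [["fillRect", [0, 0, 32, 32]]]
```

In a browser, `stubCtx` would simply be the real `canvas.getContext("2d")`.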

fionic•2d ago
Cool! Setting aside the productizing or framework-building, I'm trying to understand how this differs from the elementary idea (which probably every game dev who has worked on game networking has tinkered with) of sending inputs to the server and then sending player positions back to all the clients? I think an even smaller footprint would be positions: two or three floats x, y(, z) instead of shapes. Anyway, this is always fine for very low latency environments where client-side prediction, lag comp, etc. would not be required. Thanks for sharing, I might give it a try! Sorry if I'm missing something.
brunovcosta•2d ago
You're correct

My approach lives somewhere between video streaming and data streaming in terms of performance

It's not intended to be faster than a proper client that brings a lot of logic and information, which diminishes the amount of data that needs to be transferred

My proof of concept is more about: can my dev exp be much better without relying on the video streaming approach? (which is heavier)

kibbi•2d ago
Interesting approach! I've thought about a similar method after reading about the PLATO platform.

When playing astro‑maze, the delay is noticeable, and in a 2D action game such delays are especially apparent. Games that don’t rely on tight real‑time input might perform better. (I'm connecting from Europe, though.)

If you add support for drawing from images (such as spritesheets or tilesheets) in the future, and the client stores those images and sounds locally, the entire screen could be drawn from these assets, so no pixel data would need to be transferred, only commands like "draw tile 56 at position (x, y)."

(By the way, opening abstra.io in a German-language browser leads to https://www.abstra.io/deundefined which shows a 404 error.)

brunovcosta•2d ago
Yeah.. as people play and I watch their feedback, it's becoming clear to me that the main source of input delay is distance to the server.. the whole game runs on a single machine in SFO, so the bad experience from Europe makes total sense

I think this is inevitable unless I add some optimism/interpolation in the client

Also, thanks for the feedback! I will fix the Abstra landing page

Try https://www.abstra.io/en instead

ModernMech•2d ago
You're running this at the airport?
fragmede•2d ago
It's a Googleism. The datacenter is referred to by the nearest airport because airport codes make it straightforward to know roughly where the DC is.
Bender•2d ago
For completeness' sake, this is not strictly a Google thing. Many companies use airport codes for their datacenters.
allthatineed•2d ago
BYOND/Space Station 13 is built upon this model.

Sprite sheets are PNGs with zTXt blocks carrying meta/frame info and a list of drawing operations to be done to construct vsprites based on any runtime server-side operations done on the sprites.

There is limited client coding via popup web view windows and a few JS APIs back to the client, but nothing you can build too much off of.

(SS14 brings this model to an open source c# framework called The Robust Engine but has some limitations related to maintainer power tripping over who should be allowed to use their open source project.)

brunovcosta•2d ago
Amazing! I'd never heard of BYOND/SS13/SS14

Thank you for the reference!

Thaxll•2d ago
"Impossible to Cheat"

Let me tell you that there is cheating in cloud rendering solutions (Stadia, Amazon Luna, etc.)

So 100% there is cheating in your solution.

It's trivial to read the screen.

brunovcosta•2d ago
You're right

Especially with today's computer vision

The cheating I'm more protected against (just like Stadia, etc.) is client/code exploitation, which we don't have to worry about in this approach

6r17•2d ago
Cheating with AI will be possible even with server-side rendering; Nvidia has released models able to learn to play. It's going to be very difficult to detect whether it's an AI or a human. Very impressive, however.
nkrisc•2d ago
That’s a very different kind of cheating though. The kind of cheating this effectively makes impossible is cheating where a player has more information than they’re intended to have.

If someone makes an AI that plays the game as a good player, then it’s effectively indistinguishable from a real player who is good. If they make it super-humanly good, then it would probably be detectable anyway.

It’s still fair in the sense that all players have the same (intended) information per the game rules.

throwaway894345•2d ago
I’m also curious if an AI could process the screen feed quickly enough to compete in first-person shooter games. Seems like it would be difficult without extremely high end hardware for the foreseeable future?
Thaxll•2d ago
It already exists.
ModernMech•2d ago
I had students build this kind of thing in 2020 by screenshotting the game and processing it with a standard OpenCV pipeline. No GenAI needed.
throwaway894345•2d ago
Thank you for educating me. How does OpenCV work from the perspective of recognizing things in an image? Is there some kind of underlying model there that learns what a target looks like or not?
ModernMech•2d ago
The way they did it, they were writing an aimbot. So the pipeline was:

- take a screenshot

- run massive skeletal detection on it to get the skeletons of any humanoids

- of those skeletons, pick a target closest to the player

- for that target, get coordinates of head node

- run a PID to control the cursor to where the head node is located

- move the camera one frame, repeat the whole process. If you can fit that pipeline to 16ms it can run in real time.
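
The PID step in that pipeline can be sketched like so. The gains are illustrative per-frame values, not tuned for any particular game, and the 1D cursor stands in for mouse movement along one axis:

```javascript
// Sketch of the PID step above: steer the cursor toward the target's head
// position once per frame. Per-frame gains, illustrative rather than tuned.
class PID {
  constructor(kp, ki, kd) {
    this.kp = kp; this.ki = ki; this.kd = kd;
    this.integral = 0;
    this.prevError = 0;
  }
  step(error) {
    this.integral += error;
    const derivative = error - this.prevError;
    this.prevError = error;
    return this.kp * error + this.ki * this.integral + this.kd * derivative;
  }
}

// Drive a 1D cursor toward x = 100 over sixty 16 ms frames.
const pid = new PID(0.4, 0, 0.1);
let cursor = 0;
for (let frame = 0; frame < 60; frame++) {
  cursor += pid.step(100 - cursor);
}
console.log(Math.round(cursor)); // 100
```

With these gains the error shrinks by roughly a third each frame, so the cursor settles on the target well inside the 60-frame budget without overshooting wildly.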

throwaway894345•22h ago
Wow, that's fascinating. Were they able to fit the whole thing inside the 16ms frame?
ModernMech•20h ago
oh yeah, with little problem, especially with a GPU it's not hard at all.
Scion9066•2d ago
There are already models specifically for things like identifying players in Counter-Strike 2, including which team they're on.

Someone has even rigged up a system like that to a TENS system to stimulate the nerves in their arm and hand to move the mouse in the correct direction and fire when the crosshair is over the enemy.

We are definitely already there.

fragmede•2d ago
They documented it on YouTube for us to see:

https://youtu.be/9alJwQG-Wbk

holy shit that's amazing.

ASalazarMX•2d ago
If there's enough players, the server could segregate them in tiers. All superhuman players would eventually end up competing between themselves.
tnelsond4•2d ago
Since you're doing this in Rust, try experimenting to see what would happen if you did zstd compression using a dictionary on the data you're sending back and forth; it might give you a performance benefit.
brunovcosta•2d ago
I will definitely try it!

I'm using Gzip since it comes with all browsers, hence an easy approach

That said, I will find some zstd decompressor for JS/wasm and try!

edit:

I just added it and the difference was huge! Thank you!

RodgerTheGreat•2d ago
I had a similar idea about 13 years ago, but I didn't extend it into a generalized game engine: https://github.com/JohnEarnest/VectorLand
aetherspawn•2d ago
The latency is a little intense from Australia … but surprisingly not as bad as I thought it would be.

It was playable.

I wonder if you can use speculative execution to play the game a few frames ahead and then the client picks what to display based on user input, or something like that.

Each frame is 16ms, so you’d have to work ahead 6 frames to conquer the nominal latency of around 100ms, which may actually be 200ms round trip.

(In that case, something like Haskell would be a good candidate to build a DSL to build the decision tree to send to the JS client…)

Neywiny•2d ago
It could help visually but you'll still have 200ms between you and your next door neighbor's actions
lurkshark•2d ago
What you’re describing is called “rollback netcode”. It’s a pretty cool chunk of theory, usually used for fighting games which are extremely sensitive to latency. This explainer has some nice graphic demos

https://bymuno.com/post/rollback

dustbunny•2d ago
It's a common misconception that this is only used in fighting games. This technique was developed first in Duke Nukem, and then exploited heavily by Carmack in Quake, and subsequently refined and built upon in other AAA FPS games, specifically for the local player movement and shooting.
ThatPlayer•2d ago
I don't think it's quite the same. Rollback netcode is like lockstep netcode, where the entire game is simulated locally and only inputs are networked. Since it's still only input being networked, network drops (or slow computers) affect everyone, requiring the simulation to slow down. Not just fighting games but RTS games would do this; if you've ever played StarCraft/Warcraft 3, you've seen it freeze when a player disconnected.

With rollback/lockstep, there's no need for a server simulation at all. Most games are not doing that: the client's local simulation is less authoritative than the server's, and may even be missing information (good for preventing wallhacks). Any dropped packets are handled by the server telling the client the exact positions of everything, leading to warping. Dropped packets and latency also only affect the problem player, rather than pausing everyone's simulations.
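
Rollback as described here fits in a few lines once the simulation is deterministic. This is a toy sketch — real engines snapshot far more state and predict inputs more cleverly:

```javascript
// Toy rollback: simulate with a predicted remote input, snapshot every frame,
// and when the real input arrives late, rewind to that frame and resimulate.
function step(state, inputs) {
  // Deterministic step: each player's position moves by their input.
  return state.map((pos, player) => pos + (inputs[player] || 0));
}

const snapshots = [[0, 0]]; // snapshots[f] = state entering frame f
const inputHistory = [];    // inputHistory[f] = [localInput, remoteInput]

// Frames 0..2: local input +1 each frame, remote input predicted as 0.
for (let f = 0; f < 3; f++) {
  inputHistory[f] = [1, 0];
  snapshots[f + 1] = step(snapshots[f], inputHistory[f]);
}
console.log(snapshots[3]); // [3, 0] -- the predicted present

// The remote input for frame 1 arrives late: it was actually +1, not 0.
function rollback(frame, player, actualInput) {
  inputHistory[frame][player] = actualInput;
  for (let f = frame; f < inputHistory.length; f++) {
    snapshots[f + 1] = step(snapshots[f], inputHistory[f]); // resimulate
  }
}
rollback(1, 1, 1);
console.log(snapshots[3]); // [3, 1] -- history corrected, no server needed
```

The resimulation loop is why determinism matters: replaying the same inputs from the same snapshot must always produce the same state on every client.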

aetherspawn•2d ago
This is awesome and exactly what it needs, but good luck creating a language that’s “signal driven” enough to encode it and then send all the possible states to the client.

If you were able to make it, it would be kind of a Hail Mary moment for making easy server games without the latency.

ingen0s•2d ago
Might be the greatest thing I have seen made in like 10 years
ftgffsdddr•2d ago
Is the source code available?
efilife•22h ago
Doesn't seem like it
emmelaich•2d ago
Reminds me of the cave X11 games. For game play I'd suggest slowing it way down.
brunovcosta•2d ago
good feedback! I'm seeing people really struggle with the control lag + speed.

I'm always biased since I test locally with no delay when developing :)

LoganDark•2d ago
are there any affordances for prediction or replay? you could try to help network latency by having the server resimulate a small period of time roughly equivalent to the client's network delay - it's not perfect without client-side prediction but it could help
brunovcosta•2d ago
It's possible, but harder than in the traditional client/server paradigm: since the client here is generic, the prediction has to be based on something other than game-specific heuristics

I'm thinking about simple ML to predict inputs and feedback. Since the amount of data generated in the streaming is massive and well structured, it looks like a possible approach

Matheus28•2d ago
Client-server multiplayer games are already kind of a very specialized type of video playback if you squint a bit (you're transmitting entities rather than pixels).

This method of multiplayer you propose is inferior in basically every way: you can't do client-side prediction to make inputs feel smoother, and non-trivial scenes will surely take up more bandwidth than just transmitting entity deltas.

ycombinatrix•2d ago
Is this how Stadia was supposed to work?
brunovcosta•2d ago
Not exactly, since they aren't opinionated in the game engine, they can't control what "primitives" are being used to render.. they probably just encode video
ycombinatrix•2d ago
I guess you're right, all their talk about working with game developers was probably just to get the games working on their hardware.
modinfo•2d ago
I just vibe-coded a multiplayer game with deterministic terrain world generation with Cleoselene in 5 min.

https://github.com/skorotkiewicz/proximity-explorer

brunovcosta•2d ago
That's AMAZING!

I will definitely follow that!

efilife•22h ago
Was it really 5 min or more like 30?
2001zhaozhao•1d ago
> - Streaming game primitives instead of pixels, which should be much lighter
> - Server-side rendering makes cheating basically impossible

This doesn't work in 3D. Unless you have the server do the work of the GPU and compute occlusion, you'll end up sending data to the client that they shouldn't be able to have (e.g. location of players and other objects behind walls)

nkrisc•1d ago
Theoretically couldn’t you send each client its data after performing occlusion culling based on each one’s camera?

Don’t some competitive games more or less already do this? Not sending player A’s position to player B if the server determines player A is occluded from player B?

I seem to recall a blog post about Apex Legends dealing with the issue of “peeker’s advantage” due to this type of culling, unless I’m just totally misremembering the subject.

Regardless, seems like it would work just fine even in 3D for the types of games where everyone has the same view more or less.
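
The per-client culling nkrisc describes could look like this on the server side. This is a toy 2D line-of-sight check against axis-aligned walls (real engines lean on GPU occlusion queries or precomputed visibility data); all names here are illustrative:

```javascript
// Per-client interest filtering: before sending a frame, drop any entity
// whose line of sight to the viewer crosses a wall.
function segmentHitsWall(a, b, wall) {
  // wall: vertical segment at x = wall.x spanning [wall.y1, wall.y2].
  if ((a.x - wall.x) * (b.x - wall.x) >= 0) return false; // same side of wall
  const t = (wall.x - a.x) / (b.x - a.x);
  const y = a.y + t * (b.y - a.y);
  return y >= wall.y1 && y <= wall.y2;
}

function visibleEntities(viewer, entities, walls) {
  return entities.filter(e =>
    !walls.some(w => segmentHitsWall(viewer, e, w))
  );
}

const walls = [{ x: 5, y1: 0, y2: 10 }];
const entities = [
  { id: "behindWall", x: 9, y: 5 },
  { id: "inTheOpen", x: 3, y: 8 },
];
console.log(visibleEntities({ x: 0, y: 5 }, entities, walls).map(e => e.id));
// ["inTheOpen"]
```

Since the filter runs per viewer, the occluded player's position never reaches the client at all, which is exactly what defeats wallhacks.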