frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
612•klaussilveira•12h ago•180 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
915•xnx•17h ago•545 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
29•helloplanets•4d ago•22 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
102•matheusalmeida•1d ago•24 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
36•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
212•isitcontent•12h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
5•kaonwarb•3d ago•1 comment

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
206•dmpetrov•12h ago•101 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
316•vecti•14h ago•140 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
355•aktau•18h ago•181 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
361•ostacke•18h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
471•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
267•eljojo•15h ago•157 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
400•lstoll•18h ago•271 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
82•quibono•4d ago•20 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
54•kmm•4d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
9•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
242•i5heu•15h ago•183 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
51•gfortaine•10h ago•16 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
138•vmatsiiako•17h ago•60 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
275•surprisetalk•3d ago•37 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•11h ago•13 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1052•cdrnsf•21h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
127•SerCe•8h ago•111 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
173•limoce•3d ago•93 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
7•jesperordrup•2h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
61•rescrv•20h ago•22 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is fast and better

https://github.com/dmtrKovalenko/zlob
17•neogoose•4h ago•9 comments

Show HN: Fixing Google Nano Banana Pixel Art with Rust

https://github.com/Hugo-Dz/spritefusion-pixel-snapper
188•HugoDz•2mo ago

Comments

threeducks•2mo ago
Could you explain a bit how the code works? For example, how does it detect the correct pixel size and how does it find out how to color the (potentially misaligned) pixels?
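A minimal sketch (in Python) of one plausible answer to the question above; this is an illustration, not necessarily how spritefusion-pixel-snapper actually works, and the file names are placeholders. The idea: estimate the grid pitch from the spacing of color-change boundaries across the image, then snap each grid cell to its dominant color.

    # Hypothetical sketch: detect the source pixel size and snap cells to it.
    # NOT the project's actual algorithm, just one plausible approach.
    import numpy as np
    from PIL import Image

    def estimate_pitch(img: np.ndarray) -> int:
        """Guess the source pixel size from where colors change between columns."""
        col_change = np.any(np.diff(img.astype(int), axis=1) != 0, axis=(0, 2))
        boundaries = np.flatnonzero(col_change) + 1
        if len(boundaries) < 2:
            return 1
        # The most common gap between change boundaries approximates the pitch
        gaps = np.diff(boundaries)
        values, counts = np.unique(gaps, return_counts=True)
        return int(values[np.argmax(counts)])

    def snap(img: np.ndarray, pitch: int) -> np.ndarray:
        """Replace each pitch x pitch cell with its most frequent color."""
        h, w, _ = img.shape
        out = img.copy()
        for y in range(0, h - h % pitch, pitch):
            for x in range(0, w - w % pitch, pitch):
                cell = img[y:y + pitch, x:x + pitch].reshape(-1, 3)
                colors, counts = np.unique(cell, axis=0, return_counts=True)
                out[y:y + pitch, x:x + pitch] = colors[np.argmax(counts)]
        return out

    img = np.array(Image.open("banana_sprite.png").convert("RGB"))
    fixed = snap(img, estimate_pitch(img))
    Image.fromarray(fixed).save("banana_sprite_snapped.png")
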
razster•2mo ago
That's actually a nice setup. Have you looked at Z-Image and the Pixel LoRA that was released? I've found it works fairly well at keeping the pixels matched with the grid.
vunderba•2mo ago
The Z-image turbo model is pretty heavily distilled. I can't imagine using it for even marginally complicated prompts.

Are you talking about the LoRA by LuisaP?

Somewhat ironically, that LoRA's showcase images themselves exhibit the exact issues (non-square pixels, much higher color depth than pixel art, etc) that stuff like this project / unfake.js / etc. are designed to fix.

https://imgur.com/a/vfvARkt

cipehr•2mo ago
Is it possible that some of the reason pixels are messed up is because of the watermarking? https://deepmind.google/models/synthid/

Or is it purely because the models just don't understand pixel art?

29athrowaway•2mo ago
They also don't understand spritesheets.
skavi•2mo ago
I wonder if this would be a simple (limited) example of defeating the watermarking? Surely there's no way SynthID is persisting in what is now a handful of pixels.
cipehr•2mo ago
Agree with you! I wonder about that myself. Also, minor imperceptible pixel color differences could be corrected as well, I'd guess?
vunderba•2mo ago
Nice. There are a couple of these (unfake, which uses pixel snapping/palette reduction, sd-palettize, which uses k-means for palette reduction, etc.) that I've used in the past in a Stable Diffusion -> Pixel Art pipeline.

I think it'd be worth calling out the differences.

[1] - https://github.com/jenissimo/unfake.js

[2] - https://github.com/Astropulse/sd-palettize
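
As a rough illustration of the k-means palette-reduction idea mentioned above, here is a minimal sketch; it assumes scikit-learn, numpy and Pillow are installed, is not taken from sd-palettize itself, and uses placeholder file names.

    # Sketch of k-means palette reduction: cluster all pixels into n colors
    # and repaint each pixel with its cluster's center color.
    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def reduce_palette(path: str, n_colors: int = 16) -> Image.Image:
        img = np.array(Image.open(path).convert("RGB"))
        pixels = img.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
        palette = km.cluster_centers_.round().astype(np.uint8)
        return Image.fromarray(palette[km.labels_].reshape(img.shape))

    reduce_palette("sprite.png", n_colors=16).save("sprite_16c.png")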

29athrowaway•2mo ago
Another annoyance of Nano Banana (and its Pro version) is that it cannot generate transparent pixels. When it tries to, it creates a hallucinated checkerboard background, which makes things worse.
vunderba•2mo ago
Yep. Your best bet is to ask for "solid white/black background" and then feed it into something like rembg [1]. It's an extra step but it'll get you partly there.

On the OpenAI side, the gpt-image-1 model has actually had the ability to produce true alpha transparent images for a while now. Too bad quality-wise they're lagging pretty badly behind other models.

[1] - https://github.com/danielgatis/rembg
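
For reference, the basic rembg step described above looks roughly like this (a sketch; file names are placeholders):

    # Generate with a solid white background, then strip it with rembg.
    from PIL import Image
    from rembg import remove

    img = Image.open("sprite_on_white.png")
    cut = remove(img)               # RGBA image with the background removed
    cut.save("sprite_transparent.png")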

SXX•2mo ago
Ask it for just a white background. Works well for both art and to-be-3D models.
jasonjmcghee•2mo ago
I can't explain it, but it's like uncanny valley pixel art. Like the artist hasn't done the final polish pass maybe?

Maybe it's the inconsistent lighting/shadows?

Maybe a pixel artist has the proper words to explain the issues.

SXX•2mo ago
Not a pixel artist, but a game dev working with pixel art:

1 - AI just tries to compress too many details into too few pixels.

When artists create pixel art, they usually add details along the way, and only the important ones, because otherwise it will look like rubbish on some screens.

Also, it's easier to, e.g., add different hats or heads or weapons to the same body. AI-generated ones are always too unique.

2 - AI tries to mimic realistic poses that look like the art is supposed to be animated in 3D.

For a real game, say an isometric tactical game, you'll never make tiles larger than 64x64 because of how much labour they take to animate. Each animation at 8 fps takes hours of work.

So pixel art is usually either high-fidelity and static or low-fi and animated in very basic ways.

smusamashah•2mo ago
The skeleton has issues; the floor tiles are very inconsistent, for example. I haven't looked more carefully. We probably notice something wrong subconsciously, but it takes time to point it out.

Generated pixel art is, for now, in an 80-90% done state. To use it in prod, the remaining issues need to be fixed, which seem to be the palette and some semantic problems. If you only generate small parts of the big picture with AI, it will be perfectly usable.

doctorpangloss•2mo ago
The borders of shapes are all wrong. It’s not too complicated. There is a small vocabulary of valid border patterns (e.g. a line rising one pixel up and two pixels right) that none of these generative models adhere to.
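
A small illustration of what "valid border patterns" means in practice (not a claim about how any of these tools work): a hand-drawn pixel-art line keeps its step run lengths consistent, so uneven runs are a giveaway.

    # Illustration only: the horizontal run lengths of an edge, given the
    # column height of the edge at each x position.
    def run_lengths(heights):
        runs, count = [], 1
        for prev, cur in zip(heights, heights[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(count)
                count = 1
        runs.append(count)
        return runs

    clean = [0, 0, 1, 1, 2, 2, 3, 3]     # consistent "up one, right two" slope
    messy = [0, 0, 1, 2, 2, 2, 3, 3, 3]  # uneven steps, typical of generated art
    print(run_lengths(clean))  # [2, 2, 2, 2]
    print(run_lengths(messy))  # [2, 1, 3, 3]
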
krisoft•2mo ago
It feels weird to me that on the before/after comparison they felt the need to zoom in on the “before” but not on the “after”.

Either both should have the magnifying glass or neither. This just makes it hard to see the difference.

thih9•2mo ago
There are more details in the fixed version too, e.g. an extra detailed dark line within the right leg (tibia) that is not present in the original; where do these details come from?
im3w1l•2mo ago
The purpose of the zoomed-out comparison is to show the quality reduction from applying this tool. The purpose of the zoomed-in before picture is to show what a typical pixel misalignment looks like. Aligned pixels can be easily imagined.
krisoft•2mo ago
> The purpose of the zoomed-out comparison is to show the quality reduction from applying this tool.

Reduction? Shouldn't the tool be improving the quality of the image? If it is reducing the quality then why do it?

> The purpose of the zoomed-in before picture is to show what a typical pixel misalignment looks like.

Okay, but how does this supposed "misalignment" look on the picture? Would I even notice it? If not, does it matter? Did they just zoom in and draw a misaligned grid over the zoomed-in image? Or are the grid fault lines visible in the gestalt?

> Aligned pixels can be easily imagined.

Everything can be easily imagined. Misaligned pixels can be imagined. They could just write "our processed images look better" and let me imagine how much nicer they are. The purpose of a comparison is to prove that they are nicer/better/crisper whatever they want to claim.

coldtea•2mo ago
>Okay, but how does this supposed "misalignment" look on the picture?

People who are the target audience for this tool already know.

>Would I even notice it?

Yes.

>The purpose of a comparison is to prove that they are nicer/better/crisper whatever they want to claim.

They don't need to prove it to their target users. They already know the problem (for which several tools exist).

im3w1l•2mo ago
The way I see it, converting something to pixel art is akin to lossy compression or quantization. The goal is to retain as much detail as possible given the constraints.

The exact way that pixels are misaligned is a feature of the specific AI models that generated the almost-pixel art.

westoque•2mo ago
> Current AI image models can't understand grid-based pixel art.

Sounds like a good use case for fixing this problem at the model layer: an image-gen model that is trained to make pixel-perfect art.

lxgr•2mo ago
I'd love this, but for removing "transparent background" checkerboards.

Nano Banana beats it on many other dimensions, but this is one thing that gpt-image-1 usually does much better.

LorenDB•2mo ago
How is the "with Rust" part relevant?
Svoka•2mo ago
I guess writing something in Rust is cool. I believe that wanting to be cool is a fundamental human desire.
dymk•2mo ago
this is a site where people discuss programming languages and tools

rust is a programming language

people interested in rust may find a tool written in rust relevant to their interests where they otherwise might not

Zecc•2mo ago
For what it's worth, it's what caught my attention. I wouldn't have found it so captivating if it had only said "Fixing Google Nano Banana Pixel Art". To be clear, it's not because of Rust in particular. It would have been the same if it said "with C#", or "with Python", or even just "programmatically". And on that note: I feel disappointed. I thought I would be reading about the development process, and not just a product presentation.
baq•2mo ago
As a Rust fan I consider this a very valid question. Rust projects should be able to defend their worth without piggybacking onto the love Rust receives from programmers anymore. ‘Not written in js/ts/golang/python’ works for me, too, but it’s a mouthful.
IgorPartola•2mo ago
This is perfect! I have had such a hard time with Nano Banana asking it to generate some very simple pixel art. One of the worst things is that it cannot seem to generate transparent backgrounds or even solid ones. It’s always some blotchy cloud of off-white pixels or a simulated fuzzy grid that shows up in some places. I will need to give this a try to clean up some of what I had been fixing by hand.
forgotoldacc•2mo ago
I simply cannot understand people who'll spend forever trying to get AI to generate basic art that any amateur with a bit of practice could do in a minute.
IgorPartola•2mo ago
I am terrible at this kind of art. I could find another amateur but the "REPL" for that is just too slow for prototyping. No it isn't perfect, but tools like this make it better, and it means that I can generate something in an hour of my time rather than spending many hours finding and interfacing with another amateur or professional. Plus the cost is better. While generating one really high quality asset is almost certainly better with a pro, generating three dozen prototypes to choose from isn't.
jtfrench•2mo ago
Go Hugo!
andai•2mo ago
At last! I have been dreaming about such a tool for years. I often find pixel art that has been scaled or poorly compressed. So it's a bunch of fuzzy squares. Can't wait to try this.
badmonster•2mo ago
What was the specific pixel art problem with Google's Nano Banana that this Rust project solved?