OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
576•klaussilveira•10h ago•167 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
889•xnx•16h ago•540 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
91•matheusalmeida•1d ago•20 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
18•helloplanets•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
21•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
197•isitcontent•11h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
199•dmpetrov•11h ago•90 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
307•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
352•aktau•17h ago•175 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
350•ostacke•17h ago•91 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
452•todsacerdoti•18h ago•228 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
20•romes•4d ago•2 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
79•quibono•4d ago•17 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
52•kmm•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
253•eljojo•13h ago•153 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
388•lstoll•17h ago•263 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
5•bikenaga•3d ago•1 comment

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
230•i5heu•13h ago•174 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
12•neogoose•3h ago•7 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
24•gmays•6h ago•5 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•10h ago•12 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
116•SerCe•7h ago•94 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
135•vmatsiiako•16h ago•59 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
268•surprisetalk•3d ago•36 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
42•gfortaine•8h ago•13 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
168•limoce•3d ago•87 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1039•cdrnsf•20h ago•431 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
60•rescrv•18h ago•22 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
88•antves•1d ago•63 comments

Qwen-Image-Layered: transparency- and layer-aware open diffusion model

https://huggingface.co/papers/2512.15603
130•dvrp•1mo ago

Comments

dvrp•1mo ago
Qwen-Image-Layered is a diffusion model that, unlike most SOTA-ish models out there (e.g. Flux, Krea 1, ChatGPT, Qwen-Image), is (1) open-weight (unlike ChatGPT Image or Nano Banana) and Apache 2.0 licensed, and (2) has two distinct inference-time features: (i) it understands the alpha channel of images (RGBA, as opposed to RGB only), which lets it generate transparency-aware bitmaps; and (ii) it understands layers [1]. Layers are how most creative professionals work in software like Photoshop or Figma, where you overlay elements, such as a foreground and a background, into a single file.

This is the first model from a major AI research lab (the people behind Qwen Image, which is basically the SOTA open image diffusion model) with those capabilities, afaik.

The timing gap for this submission (16 hours ago) is because that's when the research/academic paper was released; the inference code and model weights were only released 5 hours ago.

---

Technically there's another difference, but this mostly matters for people who are interested in AI research or AI training. From their abstract: “[we introduce] a Multi-stage Training strategy to adapt a pretrained image generation model into a multilayer image decomposer,” which seems to imply that you can adapt an existing (but different) image model to understand layers as well. They also describe a pipeline to obtain the training data from Photoshop .PSD files.

dvrp•1mo ago
See also:

- Paper page: https://huggingface.co/papers/2512.15603

- Model page: https://huggingface.co/Qwen/Qwen-Image-Layered

- Quantized model page: https://huggingface.co/QuantStack/Qwen-Image-Layered-GGUF

- Blog URL: https://qwenlm.github.io/blog/qwen-image-layered/ (404 at the time of writing this comment, but it'll probably release soon)

- GitHub page: https://github.com/QwenLM/Qwen-Image-Layered

smusamashah•1mo ago
Article link https://qwen.ai/blog?id=qwen-image-layered
SV_BubbleTime•1mo ago
I’m still not clear: is it going to deliver the individual layers to you?

If you set a layer count of 5, for example, will it determine what is on each layer, or do I need to prompt that?

And I assume you need enough VRAM, because each layer will effectively be a whole image in pixel or latent space… so if I have a 1 MP image and 5 layers, I would likely need to be able to fit a 5 MP image in VRAM?

Or can this be done in multiple steps, where I wouldn’t need all 5 layers in active VRAM, and the assembly is a separate step at the end after generating one layer at a time?
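For scale, a quick back-of-envelope sketch of the decoded-layer buffers the comment above is asking about (the RGBA/fp16 figures are illustrative assumptions; real VRAM usage is dominated by latents, activations, and model weights, not these buffers):

```python
# Back-of-envelope estimate of memory for holding N decoded RGBA layers
# at once. Illustrative assumptions: 4 channels (RGBA), fp16 storage
# (2 bytes per channel). Not a real VRAM profile.

def layer_buffer_mb(megapixels: float, layers: int,
                    channels: int = 4, bytes_per_channel: int = 2) -> float:
    pixels = megapixels * 1_000_000
    return pixels * channels * bytes_per_channel * layers / 1_000_000

# A 1 MP image decomposed into 5 layers is ~5x the single-image buffer:
one = layer_buffer_mb(1, 1)    # 8.0 MB
five = layer_buffer_mb(1, 5)   # 40.0 MB
```

So the decoded layer buffers themselves are small either way; the interesting part of the question is whether the model attends across all layers jointly during generation, which this sketch says nothing about.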

jamilton•1mo ago
The linked GitHub README says it outputs a PowerPoint file of the layers.
Llamamoe•1mo ago
...of all the possible formats, it outputs.. a powerpoint presentation..? What.
djfobbz•1mo ago
Lol, right?!?! I would've expected sequential PNGs followed by SVGs once the model improved.
CamperBob2•1mo ago
That's what the example code at https://old.reddit.com/r/StableDiffusion/comments/1pqnghp/qw... generates. You get 0.png, 1.png ... n.png, where n is the requested number of layers minus 1.

It'll drop a 600W RTX 6000 to its knees for about a minute, but it does work.
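Once you have the per-layer PNGs, recombining them into the flattened image is plain alpha compositing; a minimal sketch with Pillow (the bottom-to-top filename order and the helper name are illustrative assumptions, not from the repo):

```python
# Recombine per-layer RGBA PNGs (0.png = bottom ... n.png = top) into one
# flattened image using the "over" operator. Illustrative helper, not from
# the Qwen-Image-Layered repo.
from PIL import Image

def flatten_layers(paths):
    layers = [Image.open(p).convert("RGBA") for p in paths]
    out = layers[0]
    for layer in layers[1:]:
        # alpha_composite pastes `layer` over `out`; opaque pixels win.
        out = Image.alpha_composite(out, layer)
    return out

# flatten_layers([f"{i}.png" for i in range(5)]).save("flat.png")
```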

dvrp•1mo ago
I saw some people at a company called Pruna AI got it down to 8 seconds with Cloudflare/Replicate, but I don't know if it was on consumer hardware or an A100/H100/H200, and I don't know if the inference optimization is open-source yet.
dragonwriter•1mo ago
The GitHub repo includes (among other things) a script (relying on python-pptx) to output the decomposed layer images into a pptx file “where you can edit and move these layers flexibly.” (I've never used PowerPoint for this, but maybe it is good enough for this and ubiquitous enough that this is sensible?)
oefrha•1mo ago
I don't see the word powerpoint anywhere in https://github.com/QwenLM/Qwen-Image-Layered, I only see a code snippet saving a bunch of PNGs:

  with torch.inference_mode():
      output = pipeline(**inputs)
      output_image = output.images[0]
  
  for i, image in enumerate(output_image):
      image.save(f"{i}.png")
Unless it's a joke that went over my head or you're talking about some other GitHub readme (there's only one GitHub link in TFA), posting an outright lie like this is not cool.
dragonwriter•1mo ago
> I don't see the word powerpoint anywhere in https://github.com/QwenLM/Qwen-Image-Layered,

The word "powerpoint" is not there, however this text is:

“The following scripts will start a Gradio-based web interface where you can decompose an image and export the layers into a pptx file, where you can edit and move these layers flexibly.”

oefrha•1mo ago
Oh okay I missed it, sorry. But that’s just using a separate python-pptx package to export the generated list of images to a .pptx file, not something inherent to the model.
ThrowawayTestr•1mo ago
Anyone have a good workflow for combining images in comfyui? I could never get it to work.
firenode•1mo ago
Did you try Civitai workflow? I also failed.
ThrowawayTestr•1mo ago
I tried a few workflows I got from civitai
firenode•1mo ago
any workflow on this? Civitai workflow doesn't work.
BimJeam•1mo ago
Woah. This is gross. Need to test that.
Alifatisk•1mo ago
It's incredible how much the Qwen team is pushing out in this field
joshstrange•1mo ago
One of the most valuable things about code generation from LLMs is the ability to edit the output: you have all the pieces and can tweak them after the fact. Same with normal generated text. Images, on the other hand, are much harder to modify, and the cases where you might want text or other “layers” are exactly where they fall apart in my experience. You might get exactly the person/place/thing rendered, but the additions to the image aren't right, and it's nearly impossible to change just the additions without losing at least some of the rest of the image.

I’ve often thought “I wish I could describe what I want in Pixelmator and have it create a whole document with multiple layers that I can go back in and tweak as needed”.

Bombthecat•1mo ago
Yep! I already wrote this on Discord: this is the first step of further integrating these models into human workflows.

I think the future is something like: start a draft, turn the draft into an image with AI, refine the boring layers, edit the important layer.