frontpage.

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•57s ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
1•Brajeshwar•5m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
2•Brajeshwar•5m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
1•Brajeshwar•5m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•8m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•11m ago•0 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•12m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•12m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•13m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•18m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•23m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•27m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•28m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•29m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•36m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•38m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•39m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•40m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•41m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•41m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•42m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•42m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•46m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•46m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•47m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•47m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•56m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•56m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
2•surprisetalk•58m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•58m ago•0 comments

Z-Image: Powerful and highly efficient image generation model with 6B parameters

https://github.com/Tongyi-MAI/Z-Image
398•doener•2mo ago

Comments

Copenjin•2mo ago
Very good, not always perfect with text or with following exactly the prompt, but 6B so... impressive.
accrual•2mo ago
I have had good textual results with the Turbo version so far. Sometimes it drops a letter in the output, but most of the time it adheres well to both the text requested and the style.

I tried this prompt on my username: "A painted UFO abducts the graffiti text "Accrual" painted on the side of a rusty bridge."

Results: https://imgur.com/a/z-image-test-hL1ACLd

pawelduda•2mo ago
Did anyone test it on 5090? I saw some 30xx reports and it seemed very fast
Wowfunhappy•2mo ago
Even on my 4080 it's extremely fast, it takes ~15 seconds per image.
accrual•2mo ago
Did you use PyTorch Native or Diffusers Inference? I couldn't get the former working yet so I used Diffusers, but it's terribly slow on my 4080 (4 min/image). Trying again with PyTorch now, seems like Diffusers is expected to be slow.
Wowfunhappy•2mo ago
Uh, not sure? I downloaded the portable build of ComfyUI and ran the CUDA-specific batch file it comes with.

(I'm not used to using Windows and I don't know how to do anything complicated on that OS. Unfortunately, the computer with the big GPU also runs Windows.)

accrual•2mo ago
Haha, I know how it goes. Thanks, I'll give that a try!

Update: works great and much faster via ComfyUI + the provided workflow file.

egeres•2mo ago
Incredibly fast, on my 5090 with CUDA 13 (& the latest diffusers, xformers, transformers, etc...), 9 sampling steps and the "Tongyi-MAI/Z-Image-Turbo" model I get:

- 1.5s to generate an image at 512x512

- 3.5s to generate an image at 1024x1024

- ~26s to generate an image at 2048x2048

It uses almost all of the 32GB of VRAM and nearly full GPU utilization. I'm using the script from the HF post: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
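
In case it's useful, here is a minimal sketch of what that kind of script looks like through Diffusers, assuming the repo loads via the generic DiffusionPipeline auto-loader; the official example may use a dedicated pipeline class and different argument names, so treat this as illustrative only:

    # Minimal sketch of Z-Image Turbo inference via Diffusers (assumed API).
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "Tongyi-MAI/Z-Image-Turbo",
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
    )
    pipe.to("cuda")

    image = pipe(
        prompt="A rusty bridge at dusk with graffiti, photorealistic",
        num_inference_steps=9,   # Turbo is tuned for ~9 sampling steps
        height=1024,
        width=1024,
    ).images[0]

    image.save("z_image_turbo.png")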

SV_BubbleTime•2mo ago
Weird, even at 2048 I don’t think it should be using all your 32GB VRAM.
egeres•2mo ago
It stays around 26GB at 512x512. I still haven't profiled the execution or looked much into the details of the architecture, but I would assume it trades memory for speed by creating caches for each inference step
SV_BubbleTime•2mo ago
IDK. Seems odd. It's an 11GB model, I don't know what it could be caching in RAM.
danielbln•2mo ago
We've come a long way with these image models, and the things you can do with a paltry 6B are super impressive. The community has adopted this model wholesale, and left Flux(2) by the wayside. It helps that Z-Image isn't censored, whereas BFL (makers of Flux 2) dedicated like a fifth of their press release to talking about how "safe" (read: censored and lobotomized) their model is.
rfoo•2mo ago
But this is a CCP model, would it refuse to generate Xi?
vunderba•2mo ago
You tell me.

https://imgur.com/a/7FR3uT1

CamperBob2•2mo ago
It will generate anything. Xi/Pooh porn, Taylor Swift getting squashed by a tank at Tiananmen Square, whatever, no censorship at all.

With simplistic prompts, you quickly conclude that the small model size is the only limitation. Once you realize how good it is with detailed prompts, though, you find that you can get a lot more diversity out of it than you initially thought you could.

Absolute game-changer of a model IMO. It is competitive with Nano Banana Pro in some respects, and that's saying something.

cubefox•2mo ago
I could imagine the Chinese government is not terribly interested in enforcing its censorship laws when this would conflict with boosting Chinese AI. Overregulation can be a significant inhibitor to innovation and competitiveness, as we often see in Europe.
CamperBob2•2mo ago
I'm sure they're also aware that few of their own citizens are in a position to run the model themselves, and that it's easy enough to use the system prompt to censor hosted copies for domestic consumption.

Censoring open-source models really doesn't make a lot of sense for China. Which could also be why local Deepseek instances are relatively easy to jailbreak.

AuryGlenz•2mo ago
To be fair, a lot of that was about their online service and not the model itself. It can definitely generate breasts.

That said I do find the focus on “safety” tiring.

ForOldHack•2mo ago
Explain lobotomizing an image generator? Modern problems require modern terms.
SV_BubbleTime•2mo ago
> whereas BFL (makers of Flux 2) dedicated like a fifth of their press release to talking about how "safe" (read: censored and lobotomized) their model is.

Agreed, but let’s not confuse what it is. Talking about safety is just “WE WONT EMBARRASS YOU IF YOU INVEST IN US”.

pferdone•2mo ago
It's mainly due to system requirements that Flux.2-dev doesn't get the same usage as Z-Image. A 5090 needs about a minute to generate an image with a basic workflow with Flux.2-dev. But prompt adherence and scene/character consistency in edit mode are (way) ahead of Qwen-Edit-2509 if you ask me.
xnx•2mo ago
Z-Image seems to be the first successor to Stable Diffusion 1.5 that delivers better quality, capability, and extensibility across the board in an open model that can feasibly run locally. Excitement is high and an ecosystem is forming fast.
SV_BubbleTime•2mo ago
Did you forget about SDXL?

Clearly you have, but while we're on the topic, it is amazing to me that it only came out 2.5 years ago.

razster•1mo ago
I've paired my Z-Image Turbo with SeedVR2 upscaling, running on an RTX 3060 12GB with 32GB system RAM; it generates in 40sec. I'm holding out for Z-Image Edit, which is a larger model; once that is out… going to be interesting. Oh, and training your own ZIT LoRA takes 5hrs for 3000 steps. So fast.
dragonwriter•1mo ago
Z-Image Base and Z-Image Edit have been announced as being the same size (or, at least, the whole set has been announced as being in the 6B size class) as Turbo, but slower (50 steps with CFG, apparently, from the announced 100 NFEs compared to Turbo's 9 NFEs, where turbo doesn't, in the use they reference, use CFG.)
vunderba•2mo ago
I've done some preliminary testing with Z-Image Turbo in the past week.

Thoughts

- It's fast (~3 seconds on my RTX 4090)

- Surprisingly capable of maintaining image integrity even at high resolutions (1536x1024, sometimes 2048x2048)

- The adherence is impressive for a 6B parameter model

Some tests (2 / 4 passed):

https://imgpb.com/exMoQ

Personally I find it works better as a refiner model downstream of Qwen-Image 20b which has significantly better prompt understanding but has an unnatural "smoothness" to its generated images.
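
As a rough illustration of that base-plus-refiner idea in Diffusers terms (the model IDs and img2img auto-pipeline support are assumptions; the actual workflow is ComfyUI-based, so this is only a sketch of the concept):

    # Illustrative two-stage sketch: a larger model handles composition,
    # then Z-Image Turbo re-denoises at low strength for texture/detail.
    # Model IDs and img2img support are assumptions, not a tested recipe.
    import torch
    from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

    base = AutoPipelineForText2Image.from_pretrained(
        "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, trust_remote_code=True
    ).to("cuda")
    refiner = AutoPipelineForImage2Image.from_pretrained(
        "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16, trust_remote_code=True
    ).to("cuda")

    prompt = "A porcupine-cone creature in a mossy forest, macro photo"
    draft = base(prompt=prompt, num_inference_steps=30).images[0]

    # Low strength keeps the base model's composition and only lets the
    # refiner touch up fine detail.
    final = refiner(prompt=prompt, image=draft, strength=0.3,
                    num_inference_steps=9).images[0]
    final.save("refined.png")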

echelon•2mo ago
So does this finally replace SDXL?

Is Flux 1/2/Kontext left in the dust by the Z Image and Qwen combo?

tripplyons•2mo ago
SDXL has been outclassed for a while, especially since Flux came out.
aeon_ai•2mo ago
Subjective. Most in creative industries regularly still use SDXL.

Once Z-image base comes out and some real tuning can be done, I think it has a chance of replacing it for the function SDXL has

Scrapemist•2mo ago
Source?
echelon•2mo ago
Most of the people I know doing local AI prefer SDXL to Flux. Lots of people are still using SDXL, even today.

Flux has largely been met with a collective yawn.

The only thing Flux had going for it was photorealism and prompt adherence. But the skin and jaws of the humans it generated looked weird, it was difficult to fine tune, and the licensing was weird. Furthermore, Flux never had good aesthetics. It always felt plain.

Nobody doing anime or cartoons used Flux. SDXL continues to shine here. People doing photoreal kept using Midjourney.

kouteiheika•2mo ago
> it was difficult to fine tune

Yep. It's pretty difficult to fine tune, mostly because it's a distilled model. You can fine tune it a little bit, but it will quickly collapse and start producing garbage, even though fundamentally it should have been an easier architecture to fine-tune compared to SDXL (since it uses the much more modern flow matching paradigm).

I think that's probably the reason why we never really got any good anime Flux models (at least not as good as they were for SDXL). You just don't have enough leeway to be able to train the model for long enough to make the model great for a domain it's currently suboptimal for without completely collapsing it.

magicalhippo•2mo ago
> It's pretty difficult to fine tune, mostly because it's a distilled model.

What about being distilled makes it harder to fine-tune?

kouteiheika•2mo ago
AFAIK a big part of it is that they distilled the guidance into the model.

I'm going to simplify all of this a lot so please bear with me, but normally the equation to denoise an image would look something like this:

    pos = model(latent, positive_prompt_emb)
    neg = model(latent, negative_prompt_emb)
    next_latent = latent + dt * (neg + cfg_scale * (pos - neg))
So what this does - you trigger the model once with a negative prompt (which can be empty) to get the "starting point" for the prediction, and then you run the model again with a positive prompt to get the direction in which you want to go, and then you combine them.

So, for example, let's assume your positive prompt is "dog", and your negative prompt is empty. So triggering the model with your empty prompt will generate a "neutral" latent, and then you nudge it in the direction of your positive prompt, in the direction of a "dog". And you do this for 20 steps, and you get an image of a dog.

Now, for Flux the equation looks like this:

    next_latent = latent + dt * model(latent, positive_prompt_emb)
The guidance here was distilled into the model. It's cheaper to do inference with, but now we can't really train the model too much without destroying this embedded guidance (the model will just forget it and collapse).

There's also an issue of training dynamics. We don't know exactly how they trained their models, so it's impossible for us to jerry rig our training runs in a similar way. And if you don't match the original training dynamics when finetuning it also negatively affects the model.

So you might ask here - what if we just train the model for a really long time - will it be able to recover? And the answer is - yes, but at this point the most of the original model will essentially be overwritten. People actually did this for Flux Schnell, but you need way more resources to pull it off and the results can be disappointing: https://huggingface.co/lodestones/Chroma
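
To make the two update rules concrete, here is a toy, self-contained version of both loops with a dummy predictor standing in for the real network (everything here is illustrative; real pipelines operate on image latents and a learned velocity model):

    import torch

    def model(latent, prompt_emb):
        # Dummy "velocity" predictor: pulls the latent toward the embedding.
        return prompt_emb - latent

    latent = torch.randn(4)
    pos_emb = torch.ones(4)    # stand-in for the "dog" prompt embedding
    neg_emb = torch.zeros(4)   # stand-in for the empty/negative prompt
    cfg_scale, steps = 4.0, 20
    dt = 1.0 / steps

    # Classifier-free guidance: two model calls per step, combined.
    x = latent.clone()
    for _ in range(steps):
        pos = model(x, pos_emb)
        neg = model(x, neg_emb)
        x = x + dt * (neg + cfg_scale * (pos - neg))

    # Guidance-distilled (Flux-style): one call per step, guidance baked in.
    y = latent.clone()
    for _ in range(steps):
        y = y + dt * model(y, pos_emb)

    print("CFG loop result:      ", x)
    print("Distilled loop result:", y)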

magicalhippo•2mo ago
Thanks for the extended reply, very illuminating. So the core issue is how they distilled it, ie that they "baked in the offset" so to speak.

I did try Chroma and I was quite disappointed, what I got out looked nowhere near as good as what was advertised. Now I have a better understanding why.

echelon•2mo ago
How much would it cost the community to pretrain something with a more modern architecture?

Assuming it was carefully done in stages (more compute) to make sure no mistakes are made?

I suppose we won't need to with the Chinese gifting so much open source recently?

kouteiheika•2mo ago
> How much would it cost the community to pretrain something with a more modern architecture?

Quite a lot. Search for "Chroma" (which was a partial-ish retraining of Flux Schnell) or Pony (which was a partial-ish retraining of SDXL). You're probably looking at a cost of at least tens of thousands or even hundreds of thousands of dollars. Even bigger SDXL community finetunes like bigASP cost thousands.

And it's not only the compute that's the issue. You also need a ton of data. You need a big dataset, with millions of images, and you need it cleaned, filtered, and labeled.

And of course you need someone who knows what they're doing. Training these state-of-art models takes quite a bit of skill, especially since a lot of it is pretty much a black art.

dragonwriter•1mo ago
> Search for "Chroma" (which was a partial-ish retraining of Flux Schnell)

Chroma is not simply a "partial-ish" retraining of Schnell; it's a retraining of Schnell after rearchitecting part of the model (replacing a 3.3B parameter portion of the model with a 250M parameter replacement with a different architecture.)

> You're probably looking at a cost of at least tens of thousands or even hundred of thousands of dollars.

For reference here, Chroma involved 105,000 hours of H100 GPU time [0]. Doing a quick search, $2/hr seems to be about the low end of pricing for H100 time per hour, so hundreds of thousands seems right for that model, and still probably lower for a base model from scratch.

[0] https://www.reddit.com/r/StableDiffusion/comments/1mxwr4e/up...

CuriouslyC•2mo ago
I don't think that's fair. SDXL is crap at composition. It's really good with LoRAs to stylize/inpaint though.
vunderba•2mo ago
Yeah, I've definitely switched largely away from Flux. Much as I do like Flux (for prompt adherence), BFL's baffling licensing structure along with its excessive censorship makes it a noop.

For ref, the Porcupine-cone creature that ZiT couldn't handle by itself in my aforementioned test was easily handled using a Qwen20b + ZiT refiner workflow and even with two separate models STILL runs faster than Flux2 [dev].

https://imgur.com/a/5qYP0Vc

mythz•2mo ago
SDXL has long been surpassed; its primary redeeming feature is its fine-tuned variants for different focuses and image styles.

IMO HiDream had the best quality OSS generations, Flux Schnell is decent as well. Will try out Z-Image soon.

dragonwriter•1mo ago
> So does this finally replace SDXL?

If Z-Image Base is easy to train and works as well with multiple LoRAs as SDXL, quite probably, yes, at least as the center of mass for new community efforts.

> Is Flux 1/2/Kontext left in the dust by the Z Image and Qwen combo?

Flux.2 looks like it may be the king of weights-available editing models, but Qwen is strong here, too (and it remains to be seen if the much-lighter Z-Image Edit is a powerhouse here, as well.) But for most local generation tasks, it's probably hard to justify the weight of Flux.2 (even though recent improvements in ComfyUI with assistance from NVidia make quantized versions usable on modest consumer systems, it's still big and slow.) Add BFL's restrictive licensing on top of the difference in resource requirements for training, and I don't imagine you would see nearly as much community LoRA and finetuning work for Flux.2 as for Z-Image, so the practical difference is likely to be narrower than the difference in base model quality.

amrrs•2mo ago
On fal, it takes less than a second many times.

https://fal.ai/models/fal-ai/z-image/turbo/api

Couple that with the LoRA, in about 3 seconds you can generate completely personalized images.

The speed alone is a big factor but if you put the model side by side with seedream and nanobanana and other models it's definitely in the top 5 and that's killer combo imho.
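
For reference, calling that hosted endpoint from Python is roughly this with fal's client library; the argument schema below is an assumption, so check the linked API page for the real fields:

    # Sketch of calling the hosted Z-Image Turbo endpoint via fal_client.
    # Assumes the fal-client package is installed and FAL_KEY is set; the
    # exact payload fields are documented on the model's API page.
    import fal_client

    result = fal_client.subscribe(
        "fal-ai/z-image/turbo",
        arguments={"prompt": "A personalized portrait, watercolor style"},
    )

    # The response typically includes URLs for the generated image(s).
    print(result)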

venusenvy47•2mo ago
I don't know anything about paying for these services, and as a beginner, I worry about running up a huge bill. Do they let you set a limit on how much you pay? I see their pricing examples, but I've never tried one of these.

https://fal.ai/pricing

tethys•2mo ago
It works with prepaid credits, so there should be no risk. Minimum credit amount is $10, though.
vunderba•2mo ago
This. You can also run most (if not all) of the models that Fal.ai hosts directly from the playground tab, including Z-Image Turbo.

https://fal.ai/models/fal-ai/z-image/turbo

Bombthecat•2mo ago
For images I like them: https://runware.ai/ super cheap and super fast, they also support Loras and you can upload your own models.

And you work with credits

Bombthecat•2mo ago
Why the downvote? Are they a scam?
tarruda•2mo ago
> It's fast (~3 seconds on my RTX 4090)

It is amazing how far behind Apple Silicon is when it comes to running non-language models.

Using the reference code from Z-image on my M1 ultra, it takes 8 seconds per step. Over a minute for the default of 9 steps.

p-e-w•2mo ago
The diffusion process is usually compute-bound, while transformer inference is memory-bound.

Apple Silicon is comparable in memory bandwidth to mid-range GPUs, but it’s light years behind on compute.
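
A rough back-of-envelope, with purely illustrative numbers, of why the compute term dominates for diffusion:

    # Per-step cost estimate for a ~6B-parameter diffusion transformer.
    # All figures below are rough assumptions, not measurements.
    params = 6e9             # model parameters
    tokens = 4096            # e.g. a 64x64 grid of latent patches (~1024px)
    flops_per_step = 2 * params * tokens   # ~2 FLOPs per param per token
    weight_bytes = params * 2              # bf16 weights read each step

    for name, tflops, bw in [
        ("RTX 4090 (assumed ~150 TFLOPS, ~1000 GB/s)", 150e12, 1000e9),
        ("M1 Ultra (assumed ~20 TFLOPS, ~800 GB/s)", 20e12, 800e9),
    ]:
        compute_s = flops_per_step / tflops
        memory_s = weight_bytes / bw
        print(f"{name}: compute ~{compute_s:.2f}s/step, weight reads ~{memory_s:.3f}s/step")

The bandwidth term lands in the same ballpark for both machines, but the compute term differs by an order of magnitude, which is roughly the gap people report for per-step diffusion times.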

tarruda•2mo ago
> but it’s light years behind on compute.

Is that the only factor though? I wonder if pytorch is lacking optimization for the MPS backend.

rfoo•2mo ago
This is the only factor. People sometimes perceive Apple's NPU as "fast" and "amazing" which is simply false.

It's just that NVIDIA GPUs suck (relatively) at *single-user* LLM inference, and that makes people feel like Apple is not so bad.

tails4e•2mo ago
I heard last year the potential future of gaming is not rendering but fully AI generated frames. 3 seconds per 'frame' now, it's not hard to believe it could do 60fps in a few short years. It makes it seem more likely such a game could exist. I'm not sure I like the idea, but it seems like it could happen
wcoenen•2mo ago
Increasing the framerate by rendering at a lower resolution + upscaling, or outright generation of extra frames has already been a thing for a few years now. NVidia calls it Deep Learning Super Sampling (DLSS)[1]. AMD's equivalent is called FSR[2].

[1] https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling

[2] https://en.wikipedia.org/wiki/GPUOpen#FidelityFX_Super_Resol...

snek_case•2mo ago
The problem is going to be how to control those models to produce a universe that's temporally and spatially consistent. Also think of other issues such as networked games, how would you even begin to approach that in this new paradigm? You need multiple models to have a shared representation that includes other players. You need to be able to sync data efficiently across the network.

I get that it's tempting to say "we no longer have to program game engines, hurray", but at the same time, we've already done the work, we already have game engines that are relatively very computationally efficient and predictable. We understand graphics and simulation quite well.

Personally: I think there's an obvious future in using AI tools to generate game content. 3D modelling and animation can be very time consuming. If you could get an AI model to generate animated characters, you could save a lot of time. You could also empower a lot of indie devs who don't have 3D modelers to help them. AI tools to generate large maps, also super valuable. Replacing the game engine itself, I think it's a taller order than people realize, and maybe not actually desirable.

adventured•2mo ago
20 years out, what will everybody be using routine 10gbps pipes in our homes for?

I'm paying $43 / month for 500mbps at present and there's nothing special about that at all (in the US or globally). What might we finally use 1gbps+ for? Pulling down massive AI-built worlds of entertainment. Movies & TV streaming sure isn't going to challenge our future bandwidth capabilities.

The worlds are built and shared so quickly in the background that with some slight limitations you never notice the world building going on behind the scenes.

The world building doesn't happen locally. Multiple players connect to the same built world that is remote. There will be smaller hobbyist segments that will still world-build locally for numerous reasons (privacy for one).

The worlds can be constructed entirely before they're downloaded. There are good arguments for both approaches (build the entire world then allow it to be accessed, or attempt to world-build as you play). Both will likely be used over the coming decades, for different reasons and at different times (changes in capabilities will unlock new arguments for either as time goes on, with a likely back and forth where one pulls ahead then the other pulls ahead).

liuliu•2mo ago
Not saying M1 Ultra is great. But you should only get ~8x slow down with proper implementation (such as Draw Things upcoming implementation for Z Image). It should be 2~3 sec per step. On M5 iPad, it is ~6s per step.
nialv7•2mo ago
China really is keeping the open weight/source AI scene alive. If in five years a consumer GPU market still exists it would be because of them.
p-e-w•2mo ago
Pretty sure the consumer GPU market mostly exists because of games, which has nothing to do with China or AI.
samus•2mo ago
The consumer GPU market is not treated as a primary market by GPU makers anymore. Similar to how Micron went B2B-only.
adventured•2mo ago
The parent comment of course understands that. Nvidia views the gaming market as an entry threat, a vector from which a competitor can come after their AI GPU market. That's the reason Nvidia won't be looking to exit the gaming scene no matter how large their AI business gets. If done correctly, staying in the gaming GPU market helps to suppress competition.

Exiting the consumer market is likely a mistake by Micron. If China takes that market segment, they'll eventually take the rest, eliminating most of Micron's value. Holding consumer is about keeping entry attacks covered.

CamperBob2•2mo ago
Exiting the consumer market is likely a mistake by Micron.

I actually think their move to shut down the Crucial channel will prove to be a good one. Why? Because we're heading toward a bimodal distribution of outcomes: either the AI bubble won't pop, and it will pay to prioritize the data center customers, or it will pop. In the latter case a consumer/business-facing RAM manufacturer will have to compete with its own surplus/unused product on scales never seen before.

Worst case scenario for Micron/Crucial, all those warehouses full of wafers that Altman has reserved are going to end up back in the normal RAM marketplace anyway. So why not let him foot the bill for fabbing and storing them in the meantime? Seems that the RAM manufacturers are just trying to make the best of a perilous situation.

gunalx•2mo ago
But why not just keep the consumer brand until stockpiles empty and blame supply issues until things possibly cool down, or people have forgotten the brand at all.
CamperBob2•2mo ago
I imagine the strategy would get out anyway as soon as retailers tried to place their next round of orders. Might as well get out in front of it with a public announcement. AI make line go up, at least for now.
soontimes•2mo ago
If that's your website, please check the GitHub link - it has a typo (gitub) and goes to a malicious site
vunderba•2mo ago
Thanks for the heads up. I just checked the site through several browsers and proxying through a VPN. There's no typo and it properly links to:

https://github.com/Tongyi-MAI/Z-Image

Screenshot of site with network tools open to indicate link

https://imgur.com/a/FZDz0K2

EDIT: It's possible that this issue might have existed in an old cached version. I'll purge the cache just to make sure.

rprwhite•2mo ago
The link with the typo is in the footer.
vunderba•2mo ago
Well holy crap - that's been there for about forever! I need a "domain name" spellchecker built into my Gulp CI/CD flow.

EDIT: Fixed! Thanks soontimes and rprwhite!

rendaw•2mo ago
That's 2/4? The kitkat bars look nothing like kitkat bars for the most part (logo? splits? white cream filling?). The DNA armor is made from normal metal links.
vunderba•2mo ago
Fair. Nobody said it was going to surpass Flux.1 Dev (a 12B parameter model) or Qwen-Image (a 20B parameter model) where prompt adherence is strictly concerned.

It's the reason I'm holding off until the Z-Image Base version is released before adding to the official GenAI model comparisons.

But for a 6B model that can generate an image in under 5 seconds, it punches far above its weight class.

As to the passing images, there is white chocolate kit-kat (I know, blasphemy, right?).

dragonwriter•1mo ago
One thing I noticed is that you tested it with very short prompts; Z-Image Turbo really likes long prompts, and recommends using an LLM as a prompt enhancer, even providing a prompt template. [0] I have had pretty good luck using an English translation that was posted on Reddit [1] with Qwen3-4B-Instruct locally (sometimes modified somewhat for particular tasks; it seems biased toward adding text to some images as-is) with short prompts.

[0] https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/discussions/...

[1] https://www.reddit.com/r/StableDiffusion/comments/1p87xcd/zi...
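
For anyone wanting to wire up the same thing, here is a minimal sketch of a local prompt enhancer sitting in front of the image model; it assumes llama.cpp's OpenAI-compatible server on its default port, and the system prompt is a placeholder, not the official template from the linked discussion:

    # Sketch: expand a short prompt with a small local LLM (e.g. Qwen3-4B-
    # Instruct served by llama.cpp) before handing it to Z-Image Turbo.
    import requests

    def enhance(short_prompt: str) -> str:
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={
                "messages": [
                    {"role": "system",
                     "content": "Expand the user's idea into one detailed, "
                                "concrete image description: subject, setting, "
                                "lighting, camera, style. No preamble."},
                    {"role": "user", "content": short_prompt},
                ],
                "temperature": 0.7,
            },
            timeout=60,
        )
        return resp.json()["choices"][0]["message"]["content"]

    print(enhance("a knight in armor made of DNA strands"))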

zkmon•2mo ago
Just want to learn - who actually needs or buys up generated images?
nine_k•2mo ago
Some ideas for your consideration:

- Illustrating blog posts, articles, etc.

- A creativity tool for kids (and adults; consider memes).

- Generating ads. (Consider artisan production and specialized venues.)

- Generating assets for games and similar, such as backdrops and textures.

Like any tool, it takes certain skill to use, and the ability to understand the results.

zkmon•2mo ago
Except for gaming, that doesn't sound like a huge market worthy of pouring millions into training these high-quality models. And there is a lot of competition too. I suspect there are some other deep-pocketed customers for these images. Probably animations? movies? TV ads?
pixl97•2mo ago
Propaganda?
nine_k•2mo ago
I'd say that picture ad market alone would suffice.

OTOH these are open-weight models released to the public. We don't get to use more advanced models for free; the free models are likely a byproduct of producing more advanced models anyway. These models can be the freemium tier, or gateway drugs, or a way of torpedoing the competition, if you don't want to believe in the goodwill of their producers.

Zopieux•2mo ago
>A creativity tool for kids (and adults; consider memes).

Fixed that for you: (and adults; consider porn).

I don't think you realize the extent of the “underground” nsfw genai community, which has to rely on open-weight models since API models all have prude filters.

leobg•2mo ago
Dying businesses like newspapers and local banks, who use it to save the money they used to spend on shutterstock images? That’s where I’ve seen it at least. Replacing one useless filler with another.
wongarsu•2mo ago
I follow an author who publishes online on places like Scribblehub and has a modestly successful Patreon. Over the years he has spent probably tens of thousands of dollars on commissioned art for his stories, and he's still spending heavily on that. But as image models have gotten better this has increasingly been supplemented with AI-images for things that are worth a couple dollars to get right with AI, but not a couple hundred to get a human artist to do them

Roughly speaking the art seems to have three main functions:

1. promote the story to outsiders: this only works with human-made art

2. enhance the story for existing readers: AI helps here, but is contentious

3. motivate and inspire the author: works great with AI. The ease of exploration and pseudo-random permutations in the results are very useful properties here that you don't get from regular art

By now the author even has an agreement with an artist he frequently commissions that he can use his style in AI art in return for a small "royalty" payment for every such image that gets published in one of his stories. A solution driven both by the author's conscience and by the demands of the readers

Youden•2mo ago
During the holiday season I've been noticing AI-generated assets on tons of meatspace ads and cheap, themed products.
lomase•2mo ago
Scammers do.
nine_k•2mo ago
It's amazing how much knowledge about the world fits into 16 GiB of the distilled model.
echelon•2mo ago
This is early days, too. We're probably going to get better at this across more domains.

Local AI will eventually be booming. It'll be more configurable, adaptable, hackable. "Free". And private.

Crude APIs can only get you so far.

I'm in favor of intelligent models like Nano Banana over ComfyUI messes (the future is the model, not the node graph).

I still think we need the ability to inject control layers and have full access to the model, because we lose too much utility by not having it.

I think we'll eventually get Nano Banana Pro smarts slimmed down and running on a local machine.

bobsmooth•2mo ago
>Local AI will eventually be booming.

With how expensive RAM currently is, I doubt it.

api•2mo ago
I’m old enough to remember many memory price spikes.
SV_BubbleTime•2mo ago
I remember saving up for my first 128MB stick and the next week it was like triple in price.
lomase•2mo ago
Do you also remember when eveybody was waiting for cryto to cool off to buy a GPU?
echelon•2mo ago
It's temporary. Sam Altman booked all the supply for a year. Give it time to unwind.
gpm•2mo ago
That's a short term effect. Long term, Wright's law will kick in and RAM will end up cheaper as a result of all the demand. It's not like we're running into a fundamental bottleneck on how much RAM we could produce, just on how much we're currently set up to produce.
muglug•2mo ago
The [demo PDF](https://github.com/Tongyi-MAI/Z-Image/blob/main/assets/Z-Ima...) has ~50 photos of attractive young women sitting/standing alone, and exactly two photos featuring young attractive men on their own.

It's incredibly clear who the devs assume the target market is.

bobsmooth•2mo ago
The ratio of naked female loras compared to naked male loras, or even non-porn loras, on civitai is at least 20 to 1. This shouldn't be surprising.
abbycurtis33•2mo ago
They're correct. This tech, like much before it, is being driven by the base desires of extremely smart young men.
cma•2mo ago
They may have an RLHF phase, but there is also just the shape of the distribution of images on the internet to consider, and, since this is from Alibaba, their part of the internet/social media (Weibo)
weregiraffe•2mo ago
Gooners are base all right, but smart? Seriously? They can't even use their imagination to jerk off.
iamflimflam1•2mo ago
The model is uncensored, so it will probably suit that target market admirably.
thih9•2mo ago
Please write what you mean instead of making veiled implications. What is the point of beating around the bush here?

It's not clear to me what you mean either, especially since female models are overwhelmingly more popular in general[1].

[1]: "Female models make up about 70% of the modeling industry workforce worldwide" https://zipdo.co/modeling-industry-statistics/

muglug•2mo ago
> Female models make up about 70% of the modeling industry workforce worldwide

Ok so a ~2:1 ratio. Those examples have a 25:1 ratio.

cwillu•2mo ago
No prize for guessing what the output for an empty prompt is.
killingtime74•2mo ago
It's interesting the handsome guy is literally Tony Leung Chiu-wai, https://www.imdb.com/name/nm0504897/, not even modified
AuryGlenz•2mo ago
Considering how gaga r/stablediffusion is about it, they weren't wrong. Apparently Flux 2 is dead in the water, even though the knowledge contained in the model is way, way higher than Z-Image's (unsurprisingly).
BoorishBears•2mo ago
Flux 2[dev] is awful.

Z-Image is getting traction because it fits on their tiny GPUs and does porn sure, but even with more compute Flux 2[dev] has no place.

Weak world knowledge, worse licensing, and it ruins the #1 benefit of a larger LLM backbone with post-training for JSON prompts.

LLMs already understand JSON, so additional training for JSON feels like a cheaper way to juice prompt adherence than more robust post-training.

And honestly even "full fat" Flux 2 has no great spot: Nano Banana Pro is better if you need strong editing, Seedream 4.5 is better if you need strong generation.

GaggiX•2mo ago
I didn't even know seedream 4.5 has been released, things move fast, I have used seedream 4 a lot through their API.
cess11•2mo ago
"The Internet is really, really great..."

https://www.youtube.com/watch?v=LTJvdGcb7Fs

mhb•2mo ago
Maybe both women and men prefer looking at attractive women.
Zopieux•2mo ago
Don't forget the expensive sport cars.
kouteiheika•2mo ago
> It's incredibly clear who the devs assume the target market is.

Not "assume". That's what the target market is. Take a look at civitai and see what kind of images people generate and what LoRAs they train (just be sure to be logged in and disable all of the NSFW filters in the options).

iamflimflam1•2mo ago
Yeah - that was a bit of a shock! I'll just unblur these pictures - how hardcore could they be...
CGamesPlay•2mo ago
I get the implication, but this is also the common configuration for fashion / beauty marketing.
khimaros•2mo ago
i have been testing this on my Framework Desktop. ComfyUI generally causes an amdgpu kernel fault after about 40 steps (across multiple prompts), so i spent a few hours building a workaround here https://github.com/comfyanonymous/ComfyUI/pull/11143

overall it's fun and impressive. decent results using LoRA. you can achieve good looking results with as few as 8 inference steps, which takes 15-20 seconds on a Strix Halo. i also created a llama.cpp inference custom node for prompt enhancement which has been helping with overall output quality.

xfalcox•2mo ago
We have vLLM for running text LLMs in production. What is the equivalent for this model?
mh-•2mo ago
I would say there isn't an equivalent. Some people will probably tell you ComfyUI - you can expose workflows via API endpoints and parameterize them. This is how e.g. Krita AI Diffusion uses a ComfyUI backend.

For various reasons, I doubt there are any large scale SaaS-style providers operating this in production today.
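
For context on the ComfyUI option: "exposing a workflow via an API endpoint" mostly means exporting the workflow in API-format JSON and POSTing it to the ComfyUI server, roughly like this sketch (the node id and input name are placeholders that depend on your exported workflow, and endpoint details can vary by version):

    # Sketch: queue an exported ComfyUI workflow programmatically.
    import json
    import requests

    with open("z_image_workflow_api.json") as f:
        workflow = json.load(f)

    # "6" is a placeholder node id for the positive-prompt text node in
    # your own exported workflow JSON.
    workflow["6"]["inputs"]["text"] = "A lighthouse in a thunderstorm, film photo"

    resp = requests.post("http://127.0.0.1:8188/prompt",
                         json={"prompt": workflow}, timeout=30)
    print(resp.json())   # returns a prompt_id you can poll via /history/<id>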

salty_frog•2mo ago
I'm intrigued by the various reasons why you think there are not any large scale SAAS operating this in production?
threeebo•2mo ago
i dont believe there is a viable use case for large scale AI-generated images as there is for text... except for porn, but many orgs with SAAS capabilities wouldn't touch that
mh-•1mo ago
I was referring to attributes of the ComfyUI software/project, not the idea of serving an image generation API to be clear. There are several of those providers.
idontwantthis•2mo ago
Does it run on apple silicon?
iamflimflam1•2mo ago
It's working for me - it does max out my 64GB though.
sheepscreek•2mo ago
Wow. I always forget how, unlike autoregressive models, diffusion models are heavier on resources (for the same number of parameters).
sheepscreek•2mo ago
Apparently - https://github.com/ivanfioravanti/z-image-mps

Supports MPS (Metal Performance Shaders). Using something that skips Python entirely, along with an MLX- or GGUF-converted model file (if one exists), will likely be even faster.

opensandwich•2mo ago
(Not tested) though apparently it already exists: https://github.com/leejet/stable-diffusion.cpp/wiki/How-to-U...
BoredPositron•2mo ago
I wish they would have used the WAN vae.
thih9•2mo ago
As an AI outsider with a recent 24GB macbook, can I follow the quick start[1] steps from the repo and expect decent results? How much time would it take to generate a single medium quality image?

[1]: https://github.com/Tongyi-MAI/Z-Image?tab=readme-ov-file#-qu...

altmanaltman•2mo ago
If you don't know anything about AI in terms of how these models are run, ComfyUI's macOS version is probably the easiest to use. There is already a Z-Image workflow that you can get, and ComfyUI will fetch all the models you need and get it working together. You can expect decent speed.
thih9•2mo ago
I'm fine with the quick start steps and I prefer CLI to GUI anyway. But if I try it and find it too complex, I now know what to try instead - thanks.

I'm still curious whether this would run on a MacBook and how long it would take to generate an image. What machine are you using?

egeozcan•2mo ago
Have a 48GB M4 Pro and every inference step takes like 10 seconds on a 1024x1024 image. so six steps and you need a minute. Not terrible, not great.
aleyan•2mo ago
I have a 24GB M5 macbook pro. In ComfyUI using default z-image workflow, generating a single image just took me 399 seconds, during which the computer froze and my airpods lost audio.

On replicate.com a single image takes 1.5s at a price of 1000 images per $1. Would be interesting to see how quick it is on ComfyUI Cloud.

Overall, running generative models locally on Macs seems like a very poor time investment.

Eisenstein•2mo ago
Try koboldcpp with the kcppt config file. The easiest way by far.

Download the release here

* https://github.com/LostRuins/koboldcpp/releases/tag/v1.103

Download the config file here

* https://huggingface.co/koboldcpp/kcppt/resolve/main/z-image-...

Set +x to the koboldcpp executable and launch it, select 'Load config' and point at the config file, then hit 'launch'.

Wait until the model weights are downloaded and launched, then open a browser and go to:

* http://localhost:5001/sdui

EDIT: This will work for Linux, Windows and Mac

cubefox•2mo ago
I'm particularly impressed by the fact that they seem to aim for photorealism rather than the semi-realistic AI-look that is common in many text-to-image models.
CamperBob2•2mo ago
Exactly, and at the same time, if you want an affected style, all you have to do is ask for it.
bilsbie•2mo ago
What kind of rig is required to run this?
CamperBob2•2mo ago
The simple Python example program runs great on almost any GPU with 8 GB or more memory. Takes about 1.5 seconds per iteration on a 4090.

The bang:buck ratio of Z-Image Turbo is just bonkers.

b0ner_t0ner•2mo ago
CPU can be used:

https://github.com/rupeshs/fastsdcpu/pull/346

reactordev•2mo ago
My issue with this model is it keeps producing Chinese people and Chinese text. I have to very specifically go out of my way to say what kind of race they are.

If I say “A man”, it's fine. A black man, no problem. It's when I add context and instructions that it just seems to want to go with some Chinese man. Which is fine, but I would like to see more variety in the people it's trained on to create more diverse images. For non-people it's amazingly good.

orbital-decay•2mo ago
All modern models have their default looks. Meaningful variety of outputs for the same inputs in finetuned models is still an open technical problem. It's not impossible, but not solved either.
SV_BubbleTime•2mo ago
I’m not sure how this is anything but a plus.

It means it respects nationality choices, and if you don't mention one, that's on your prompting, not a failure of the model to default to the nationality you would prefer.

ForOldHack•2mo ago
It would be more useful to have some standards on what one could expect in terms of hardware requirements and expected performance.
thot_experiment•2mo ago
I've messed with this a bit and the distill is incredibly overbaked. Curious to see the capabilities of the full model but I suspect even the base model is quite collapsed.
gatane•2mo ago
Dude, please give money to artists instead of using genAI
phantomathkg•2mo ago
Unfortunately, another China-censored model. Simply ask it to generate "Tank Man" or "Lady Liberty Hong Kong" and the model returns a blackboard with text saying "Maybe Not Safe".
user34283•2mo ago
This is an issue with your provider. You need to download the model.

It generates an image of a tank and the statue of liberty for those prompts.

icyfox•2mo ago
We talked about this model in some depth on the last Pretrained episode: https://youtu.be/5weFerGhO84?si=Eh_92_9PPKyiTU_h&t=1743

Some interesting takeaways imo:

- Uses existing model backbones for text encoding & semantic tokens (why reinvent the wheel if you don't need to?)

- Trains on a whole lot of synthetic captions of different lengths, ostensibly generated using some existing vision LLM

- Solid text generation support is facilitated by training on all OCR'd text from the ground truth image. This seems to match how Nano Banana Pro got so good as well; I've seen its thinking tokens sketch out exactly what text to say in the image before it renders.

GuestFAUniverse•2mo ago
All the examples I tried were garbage. Looked decent -- no horrors -- but didn't do the job.

Anything with "most cultures" were manga-influenced comic strips with kanji. Useless.

GaggiX•2mo ago
>manga-influenced comic strips with kanji. Useless.

Are you sure it was Japanese? Because the model is Chinese so it's likely to output Chinese (it happened in my testing).

GuestFAUniverse•2mo ago
Honestly I don't know if it was (Simplified) Chinese, or Japanese Kanji (so, symbols derived from Chinese).

And it isn't even relevant. "most cultures" cannot read anything of it. So what's the nitpicking about?

GaggiX•2mo ago
>So what's the nitpicking about?

Idk, I just thought it was funny to read the ignorant comment that called the Chinese model useless because it rendered Chinese text, while calling that text Japanese. The model is trained to render English or Chinese text.

Tepix•2mo ago
I'm wondering: is it faster or slower when spread across two GPUs (RTX 3090)?
ArcaneMoose•2mo ago
This model is awesome. I am building an infinite CYOA game and this was a drop-in replacement for my scene image generation. Faster, cheaper, and higher quality than what I was using before!