https://chat.qwen.ai/s/0f9d558c-2108-4350-98fb-6ee87065d587?...
As an example, when asked to change the background, it also completely changed the bear (it has the same shirt, but the fur and face are clearly different). And when it turned the bear into a balloon, it changed the background (removing the pavement) and lost the left seed in the watermelon.
Is it something that can be fixed with better prompting, or is it a limitation of the model/architecture?
Firefox on iOS, FTR.
Even when used to add new details, it sticks very strongly to the existing image's overall aesthetic.
https://specularrealms.com/ai-transcripts/experiments-with-f...
“I get it” is actually just some arbitrary personal benchmark.
The reason we use math in physics is its specificity. It's the same reason coding is so hard [0,1]. I think people aren't giving themselves enough credit here for how much they (you) understand about things. It is the nuances that really matter. There's so much detail here, and we often forget how important it is because it's just normal to us. It's like forgetting about the ground you walk upon.
I think something everyone should read is Asimov's "Relativity of Wrong"[2]. This is what we want to see in these systems if we want to start claiming they understand things. We want to see them do deduction and abduction. To be able to refine concepts and ideas. To be able to discover things that are more than just a combination of things they've ingested. What's really difficult here is that we train these things on all human knowledge, and just reciting that knowledge back doesn't demonstrate intelligence. It's very unlikely that they losslessly compress that knowledge into these model sizes, but without very deep investigation into that data and probing of this knowledge it is very hard to understand what they know and what they memorize. Really, this is a very poor way to go about trying to make intelligence[3], or at least to make intelligence and end up knowing it is intelligent.
To really "understand" things we need to be able to propose counterfactuals[4]. Every physics statement is a counterfactual statement. Take F=ma as a trivial example. We can modify the mass or the acceleration to our heart's content and still determine the force. We can observe a specific mass moving at a specific acceleration and then ask the counterfactual "what if it was twice as heavy?" (twice the mass). *We can answer that!* In fact, your mental model of the world does this too! Yo may not be describing it with math (maybe you are ;) but you are able to propose counterfactuals and do a pretty good job a lot of the time. Doesn't mean you always need to be right though. But the way our heads work is through these types of systems. You daydream these things, you imagine them while you play, and all sorts of things. This, I can say, with high confidence, is not something modern ML (AI) systems do.
[0] https://youtube.com/watch?v=cDA3_5982h8
[1] Code is math. There's an isomorphism between Turing-complete languages and computable mathematics. You can look more into my namesake, Church, and Turing if you want to get more formal, or wait for the comment that corrects a nuanced mistake here (yes, it exists). Also, note that physics and math are not the same thing, but mathematics is unreasonably effective (yes, this is a reference).
[2] https://hermiene.net/essays-trans/relativity_of_wrong.html
[3] This is a very different statement from "making something useful." Without a doubt these systems are useful. Do not conflate the two.
rushingcreek•4h ago
If Qwen is concerned about recouping its development costs, I suggest looking at BFL's Flux Kontext Dev release from the other day as a model: let researchers and individuals get the weights for free and let startups pay for a reasonably-priced license for commercial use.
Jackson__•3h ago
So it is trained off OAI, as closed off as OAI, and, most importantly, worse than OAI. What a bizarre strategy to gate-keep this behind an API.
[0]
https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...
https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...
https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...
echelon•3h ago
Both Alibaba and Tencent championed open source (Qwen family of models, Hunyuan family of models), but now they've shut off the releases.
There's totally a play where models become a loss leader for SaaS/PaaS/IaaS and where they extinguish your closed-source competition.
Imagine spreading your model so widely and then making the terms: "do not use in conjunction with closed-source models".
diggan•3h ago
What are you talking about? That feels like a very strong claim considering there are ongoing weight releases; wasn't there one just today or yesterday from a Chinese company?
yorwba•1h ago
New entrants may keep releasing weights as a marketing strategy to gain name recognition, but once they have established themselves (and investors start getting antsy about ROI) making subsequent releases closed is the logical next step.
vachina•3h ago
Jackson__•2h ago
It's really too close to be anything but a model trained on these outputs; the whole vibe just screams OAI.
VladVladikoff•2h ago
refulgentis•23m ago
Let's say it's 100 images because you're doing a quick LoRA. That'd be about $5.00 at medium quality (~$0.05/image) or $1 at low (~$0.01/image).
Let's say you're training a standalone image model. The order of magnitude of input images is ~1B, so $10M at low and $50M at medium.
250 tokens/image for low, ~1000 for medium, which gets us to:
Fastest LoRA? $1-$5, 25,000 - 100,000 tokens output. All the training data for a new image model? $10M-$50M, 250B - 1T tokens out.
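A quick sanity-check of that arithmetic in code (a minimal sketch; the per-image prices and tokens-per-image figures are the rough assumptions above, not official pricing):

    # Back-of-the-envelope cost estimate using the assumed figures above
    PRICE_PER_IMAGE = {"low": 0.01, "medium": 0.05}   # USD per generated image (assumed)
    TOKENS_PER_IMAGE = {"low": 250, "medium": 1000}   # output tokens per image (assumed)

    def estimate(n_images, quality):
        return n_images * PRICE_PER_IMAGE[quality], n_images * TOKENS_PER_IMAGE[quality]

    print(estimate(100, "low"))       # quick LoRA:       ($1, 25,000 tokens)
    print(estimate(100, "medium"))    #                   ($5, 100,000 tokens)
    print(estimate(10**9, "low"))     # standalone model: ($10M, 250B tokens)
    print(estimate(10**9, "medium"))  #                   ($50M, 1T tokens)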
diggan•3h ago
But if you're suggesting they should do open weights, doesn't that mean people should be able to use it freely?
You're effectively suggesting "trial-weights", "shareware-weights", "academic-weights" or something like that, rather than "open weights". To me, "open weights" makes it sound like you can use them for whatever you want, just like with "open source" software. If it misses a large part of what makes "open source" open source, like "use it for whatever you want", then the label gives the wrong idea.
rushingcreek•3h ago
I think that releasing the weights openly but with this type of dual-license (hence open weights, but not true open source) is an acceptable tradeoff to get more model developers to release models openly.
diggan•2h ago
But isn't that true for software too? Software is expensive to develop, and lots of developers/companies choose not to make their code public for free. Does that mean you'd also feel it's OK to call software "open source" even though it doesn't allow usage for any purpose? That would then lead to more "open source" software being released, at least for individuals and researchers?
rushingcreek•2h ago
diggan•2h ago
I mean, it wasn't binary earlier; it was "to get more model developers to release", so not a binary choice but a gradient, I suppose. Would you still make the same call for software as you do for ML models and weights?
echelon•3h ago
Alibaba just shut off the Qwen releases
Tencent just shut off the Hunyuan releases
Bytedance just released Seedream, but it's closed
It seems like it's over.
They're still clearly training on Western outputs, though.
I still suspect that the strategic thing to do would be to become 100% open and sell infra/service.
pxc•3h ago
natrys•3h ago
Alibaba has from the beginning had some series of models that are always closed-weights (*-max, *-plus, *-turbo, etc., but also QvQ). It's not a new development, nor does it prevent their open models. And the VL models have been opened 2-3 months after GA in the API.
> Tencent just shut off the Hunyuan releases
Literally released one today: https://huggingface.co/tencent/Hunyuan-A13B-Instruct
echelon•41m ago
Hunyuan 3D 2.5, which is an order of magnitude better than Hunyuan 3D 2.1, is also being withheld.
I suspect that now that they feel these models are superior to Western releases in several categories, they no longer have a need to release these weights.
logicchains•3h ago
echelon•40m ago
jacooper•1m ago
dheera•2h ago
> let researchers and individuals get the weights for free and let startups pay for a reasonably-priced license for commercial use
I'm personally doubtful companies can recoup tens of millions of dollars in investment, GPU hours, and engineering salaries from image generation fees.