frontpage.

How did China come to dominate the world of electric cars?

https://www.technologyreview.com/2023/02/21/1068880/how-did-china-dominate-electric-cars-policy/
1•rbanffy•1m ago•0 comments

How Chinese Carmakers Doubled Their Share of the European Market

https://www.nytimes.com/2025/06/18/business/china-byd-cars-europe.html
1•bookofjoe•2m ago•1 comment

Dia by the Browser Company

https://www.diabrowser.com/
1•kaycebasques•3m ago•0 comments

Blind spots on American cars are expanding

https://usa.streetsblog.org/2025/06/26/study-americas-blind-spots-are-expanding
1•anigbrowl•3m ago•0 comments

Robotic Heart Transplant Marks Surgical Breakthrough – Neuroscience News

https://neurosciencenews.com/robotic-heart-transplant-29354/
1•MaysonL•4m ago•0 comments

Conversations with Bill, Steve, and Satya

https://unlocked.microsoft.com/gates-ballmer-nadella/
1•mooreds•4m ago•0 comments

What is Acid Communism? (2019)

https://medium.com/swlh/what-is-acid-communism-e5c65ecf6188
1•iscream26•9m ago•1 comment

Brazil Supreme Court rules digital platforms to be liable for users' posts

https://apnews.com/article/brazil-supreme-court-social-media-ruling-324b9d79caa9f9e063da8a4993e382e1
2•rogerthis•11m ago•0 comments

Ask HN: What CS or Software Engineering subfields are worth specializing in?

3•pixelworm•11m ago•0 comments

CISA: AMI MegaRAC bug enabling server hijacks exploited in attacks

https://www.bleepingcomputer.com/news/security/cisa-ami-megarac-bug-that-lets-hackers-brick-servers-now-actively-exploited/
3•cws•11m ago•0 comments

Private Equity's Medicaid Problem

https://www.axios.com/pro/health-tech-deals/2025/06/26/private-equity-medicaid-sales
1•cwwc•12m ago•0 comments

H-1B Middlemen Bring Cheap Labor to Citi, Capital One

https://www.bloomberg.com/graphics/2025-h1b-visa-middlemen-cheap-labor-for-us-banks/
4•mfiguiere•12m ago•2 comments

Ranking generative AI companies from most to least evil

https://cwodtke.medium.com/i-love-generative-ai-and-hate-the-companies-building-it-3fb120e512ac
2•anigbrowl•13m ago•0 comments

Code Researcher: Deep Research Agent for Large Systems Code and Commit History

https://arxiv.org/abs/2506.11060
1•PaulHoule•14m ago•0 comments

Leading Without Non-Negotiables Is Just Damage Control

https://scottkosman.com/post/blog/leading-without-non-negotiables/
1•mooreds•18m ago•0 comments

Show HN: A Beautiful Python GUI Framework

https://github.com/mebaadwaheed/winup
3•ebaadesque•23m ago•1 comment

Hosting Website on Phone

https://rohanrd.xyz/posts/hosting-website-on-phone/
2•quaintdev•24m ago•0 comments

Call Center Workers Are Tired of Being Mistaken for AI

https://www.bloomberg.com/news/articles/2025-06-27/as-ai-infiltrates-call-centers-human-workers-are-being-mistaken-for-bots
4•koolhead17•25m ago•1 comment

Any employee can build apps and it's hell for developers

https://writer.com/engineering/agent-development-lifecycle/
4•bproper•26m ago•1 comment

Managed Servers

https://www.hetzner.com/managed-server/
3•dumbledoren•28m ago•1 comment

Show HN: WebDev Club A curated space for web developers to learn, share and grow

https://webdev.club/
1•kkxingh•34m ago•0 comments

Moving My Website from Static Hosting to Caddy (2023)

https://www.apalrd.net/posts/2023/studio_website/
2•indigodaddy•34m ago•0 comments

Play "The Plot of the Phantom" the text adventure that took 40 years to finish

https://scottandrew.com/blog/2025/06/you-can-now-play-plot-of-the-phantom-the-text-adventure-game/
1•SeenNotHeard•35m ago•0 comments

The Alters: unintentionally the realest game about parenting I've ever played

https://www.theguardian.com/games/2025/jun/27/the-alters-unintentionally-the-realest-game-about-parenting-ive-ever-played
1•robaato•36m ago•0 comments

Show HN: Kokonut UI – open-source UI Library

https://kokonutui.com/
1•kokonutt_•36m ago•0 comments

New Study Finds Glass Bottles Leak 50x More Microplastics Than Plastic

https://www.sustainability-times.com/research/glass-is-the-real-threat-new-study-finds-glass-bottles-leak-50x-more-microplastics-than-plastic-alarming-scientists-globally/
2•mgh2•37m ago•1 comment

Coding Agents 101: Some tips for using agents productively

https://devin.ai/agents101
3•waldenyan20•37m ago•1 comment

James Webb Space Telescope Reveals Its First Direct Image of an Exoplanet

https://www.smithsonianmag.com/smart-news/james-webb-space-telescope-reveals-its-first-direct-image-discovery-of-an-exoplanet-180986886/
1•divbzero•42m ago•0 comments

AI CEO

https://replaceyourboss.ai/
2•smartmic•42m ago•0 comments

I built a Figma plugin to create 3D device mockups without leaving Figma

https://www.figma.com/community/plugin/1518871640838086765/mockit-3d-device-mockups
1•lucybuilds•43m ago•0 comments

Qwen VLo: From "Understanding" the World to "Depicting" It

https://qwenlm.github.io/blog/qwen-vlo/
96•lnyan•3h ago

Comments

rushingcreek•3h ago
It doesn't seem to have open weights, which is unfortunate. One of Qwen's strengths historically has been their open-weights strategy, and it would have been great to have a true open-weights competitor to 4o's autoregressive image gen. There are so many interesting research directions that are only possible if we can get access to the weights.

If Qwen is concerned about recouping its development costs, I suggest looking at BFL's Flux Kontext Dev release from the other day as a model: let researchers and individuals get the weights for free and let startups pay for a reasonably-priced license for commercial use.

Jackson__•2h ago
It's also very clearly trained on OAI outputs, which you can tell from the orange tint to the images[0]. Did they even attempt to come up with their own data?

So it is trained off OAI, as closed off as OAI, and most importantly: worse than OAI. What a bizarre strategy to gate-keep this behind an API.

[0]

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...

https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...

echelon•2h ago
The way they win is to be open. I don't get why China is shutting down open source. It was a knife at the jugular of US tech dominance.

Both Alibaba and Tencent championed open source (Qwen family of models, Hunyuan family of models), but now they've shut off the releases.

There's totally a play where models become loss-leaders for SaaS/PaaS/IaaS and extinguish your closed competition.

Imagine spreading your model so widely and then making the terms: "do not use in conjunction with closed-source models".

diggan•1h ago
> I don't get why China is shutting down open source [...] now they've shut off the releases

What are you talking about? That feels like a very strong claim considering there are ongoing weight releases; wasn't there one just today or yesterday from a Chinese company?

yorwba•4m ago
The problem with giving away weights for free while also offering a hosted API is that once the weights are out there, anyone else can also offer it as a hosted API with similar operating costs, but only the releasing company had the initial capital outlay of training the model. So everyone else is more profitable! That's not a good business strategy.

New entrants may keep releasing weights as a marketing strategy to gain name recognition, but once they have established themselves (and investors start getting antsy about ROI) making subsequent releases closed is the logical next step.
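A back-of-envelope sketch of that asymmetry in Python, with purely illustrative numbers (the training cost, serving cost, price, and volume below are assumptions, not real figures):

    # Toy economics of releasing weights while also selling a hosted API.
    # Every number here is made up purely for illustration.
    training_cost = 30_000_000        # one-time outlay, paid only by the releasing lab (USD)
    serving_cost_per_m_tokens = 0.50  # roughly the same for anyone hosting the open weights
    price_per_m_tokens = 2.00         # market price, identical for all hosts
    tokens_sold_m = 50_000_000        # million-token units sold over the model's lifetime

    gross_margin = (price_per_m_tokens - serving_cost_per_m_tokens) * tokens_sold_m

    releasing_lab_profit = gross_margin - training_cost  # pays for the training run
    free_rider_profit = gross_margin                     # same revenue, no training bill

    print(f"releasing lab: {releasing_lab_profit:,.0f}")  # 45,000,000
    print(f"free rider:    {free_rider_profit:,.0f}")     # 75,000,000

With any numbers of this shape, the free rider clears more profit on identical revenue, which is exactly the asymmetry described above.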

vachina•2h ago
Huh, so orange tint = OpenAI output? Maybe their training process ended up causing the model to prefer that color balance.

Jackson__•1h ago
Here's an extreme example that shows how it continually adds more orange: https://old.reddit.com/r/ChatGPT/comments/1kawcng/i_went_wit...

It's really too close to be anything but a model trained on these outputs; the whole vibe just screams OAI.
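A toy illustration of why a small per-generation color bias compounds when each output is fed back in as the next input (the 2% warm shift per pass and the neutral-grey starting point are arbitrary assumptions, not measured properties of any model):

    # Toy simulation: feed an image back through a generator that has a
    # slight warm bias, and watch a neutral grey drift toward orange.
    r, g, b = 128.0, 128.0, 128.0   # start from neutral grey
    warm_shift = 0.02               # assumed 2% drift per generation (illustrative)

    for generation in range(1, 11):
        r = min(255.0, r * (1 + warm_shift))  # red creeps up each pass
        b = max(0.0, b * (1 - warm_shift))    # blue creeps down each pass
        print(f"pass {generation:2d}: R={r:5.1f} G={g:5.1f} B={b:5.1f}")

    # After ten passes the grey is visibly warm (~R156 G128 B105), even though
    # each individual step changed the balance by only a couple of percent.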

VladVladikoff•45m ago
What would be the approximate cost of doing this? How many million API requests must be made? How many tokens in total?

diggan•2h ago
> One of Qwen's strengths historically has been their open-weights strategy [...] let researchers and individuals get the weights for free and let startups pay for a reasonably-priced license for commercial use.

But if you're suggesting they should do open weights, doesn't that mean people should be able to use it freely?

You're effectively suggesting "trial weights", "shareware weights", "academic weights" or something like that, rather than "open weights", which to me sounds like you can use them for whatever you want, just like with "open source" software. If the release misses a large part of what makes "open source" open source, like "use it for whatever you want", then the label gives the wrong idea.

rushingcreek•2h ago
I am personally in favor of true open source (e.g. an Apache 2 license), but the reality is that these models are expensive to develop and many developers are choosing not to release their model weights at all.

I think that releasing the weights openly but with this type of dual-license (hence open weights, but not true open source) is an acceptable tradeoff to get more model developers to release models openly.

diggan•1h ago
> but the reality is that these models are expensive to develop and many developers are choosing not to release their model weights at all.

But isn't that true for software too? Software is expensive to develop, and lots of developers/companies are choosing not to make their code public for free. Does that mean you also feel like it would be OK to call software "open source" although it doesn't allow usage for any purpose? That would then lead to more "open source" software being released, at least for individuals and researchers?

rushingcreek•1h ago
Yes, I think the same analogy applies. Given a binary choice of a developer not releasing any code at all or releasing code under this type of binary "open-code" license, I'd always take the latter.

diggan•49m ago
> Given a binary choice of a developer not releasing any code at all

I mean it wasn't binary earlier, it was "to get more model developers to release", so not a binary choice, but a gradient I suppose. Would you still make the same call for software as you do for ML models and weights?

echelon•2h ago
The era of open weights from China appears to be over for some reason. It happened all of a sudden and seems to be coordinated.

Alibaba just shut off the Qwen releases

Tencent just shut off the Hunyuan releases

Bytedance just released Seedream, but it's closed

It seems like it's over.

They're still clearly training on Western outputs, though.

I still suspect that the strategic thing to do would be to become 100% open and sell infra/service.

pxc•2h ago
Why? And can we really say that already? Wasn't the Qwen3 release still very recent?

natrys•2h ago
> Alibaba just shut off the Qwen releases

Alibaba has from the beginning had some series of models that are always closed-weights (*-max, *-plus, *-turbo, etc., but also QvQ). It's not a new development, nor does it prevent their open models. And the VL models are opened up after 2-3 months of general availability in the API.

> Tencent just shut off the Hunyuan releases

Literally released one today: https://huggingface.co/tencent/Hunyuan-A13B-Instruct

logicchains•2h ago
What do you mean, Tencent just shut off the Hunyuan releases? There was another open-weights release just today: https://huggingface.co/tencent/Hunyuan-A13B-Instruct . And the latest Qwen and DeepSeek open-weight releases were under two months ago; there hasn't been enough time for them to finish a new version since then.

dheera•1h ago
> One of Qwen's strengths historically has been their open-weights strategy

> let researchers and individuals get the weights for free and let startups pay for a reasonably-priced license for commercial use

I'm personally doubtful companies can recoup tens of millions of dollars in investment, GPU hours, and engineering salaries from image generation fees.

aredox•3h ago
I don't think these words mean what they think they do...

frotaur•3h ago
Does anybody know if there is a technical report for this, or for other models that generate images in a similar way? I'd really like to understand the architecture behind 4o-like image gen.

rickydroll•3h ago
To my eyes, all these images hit the uncanny valley. All the colors and the shadows are just off.

skybrian•2h ago
I tried the obligatory pelican riding a bicycle (as an image, not SVG) and some accordion images. It has a bit of trouble with fingers and with getting the black keys right. It's fairly fast.

https://chat.qwen.ai/s/0f9d558c-2108-4350-98fb-6ee87065d587?...

hexmiles•2h ago
While looking at the examples of editing the bear image, I noticed that the model seemed to change more things than were strictly asked.

For example, when asked to change the background, it also completely changed the bear (it has the same shirt, but the fur and face are clearly different). And when it turned the bear into a balloon, it changed the background (removing the pavement) and lost the left seed in the watermelon.

Is this something that can be fixed with better prompting, or is it a limitation of the model/architecture?

djaychela•2h ago
How do you stop the auto read-aloud? Why can't websites just sit there and wait until I ask them to do something? It full-screen autoplayed a video on watch and then just started reading.

Firefox on iOS, FTR.

b0a04gl•2h ago
The image gets compressed into 256 tokens before the language model sees it. Ask it to add a hat and it redraws the whole face, because objects aren't stored as separate things. There's no persistent bear in memory; it all lives inside one fused latent soup, and every output is a fresh sample under new constraints. Every prompt tweak rebalances the whole embedding, which is why even small changes ripple across the image. I treat it as single-shot scene synthesis, which is good for different use cases.
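A minimal sketch of that mental model, assuming a VQ-style encoder and an autoregressive sampler as stand-ins (the 256-token figure comes from the comment above; the codebook size and the stub functions are illustrative, not Qwen VLo's actual components):

    import numpy as np

    VOCAB, GRID = 8192, 256  # assumed codebook size and 16x16 = 256 image tokens

    def encode_image(image_seed: int) -> np.ndarray:
        """Stub VQ encoder: compress an 'image' into 256 discrete tokens."""
        rng = np.random.default_rng(image_seed)
        return rng.integers(0, VOCAB, size=GRID)

    def sample_tokens(prompt: str, context: np.ndarray) -> np.ndarray:
        """Stub autoregressive sampler: regenerates the ENTIRE token grid,
        conditioned jointly on the prompt and the previous tokens."""
        rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
        # every position is resampled; no object is held fixed
        return (context + rng.integers(1, VOCAB, size=GRID)) % VOCAB

    original = encode_image(image_seed=42)
    edited = sample_tokens("same bear, but add a hat", original)

    changed = np.mean(original != edited)
    print(f"fraction of image tokens that changed: {changed:.0%}")  # 100%

Because every token is resampled under the new prompt, nothing pins down "the same bear", which lines up with the unrequested changes noted a few comments up.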

leodriesch•2h ago
That's what I really like about Flux Kontext: it has editing capabilities similar to the multimodal models, but it doesn't mess up the details. Editing with gpt-image-1 only really works for complete style changes like "make this Ghibli", not for adding glasses to a photorealistic image and having it retain all the details.

veltas•1h ago
Rather, I think machine learning has made a lot more progress 'depicting' the world than 'understanding' it.

ivape•1h ago
Why do you think humans understand the world any better? We have emotions about the world, but emotions do not grant you understanding, where “understanding” is itself something you would still need to define.

“I get it” is actually just some arbitrary personal benchmark.

godelski•2m ago
As an ML researcher and a degree-holding physicist, I'm really hesitant to use the word "understanding" (and, much less so, "describing") around these models. I don't find the language helpful and think it's mostly harmful, tbh.

The reason we use math in physics is its specificity. It's the same reason coding is so hard [0,1].

[0] https://youtube.com/watch?v=cDA3_5982h8

[1] Code is math. There's an isomorphism between Turing-complete languages and computable mathematics. You can look into my namesake, Church, and Turing if you want to get more formal, or wait for the comment that corrects a nuanced mistake here (yes, it exists).
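For a concrete taste of the Church side of that correspondence, here are Church numerals written as plain Python lambdas (a standard textbook construction, nothing specific to the models discussed here):

    # Church numerals: natural numbers encoded purely as functions.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul  = lambda m: lambda n: lambda f: n(m(f))

    def to_int(n):
        """Read a Church numeral back out by counting applications of f."""
        return n(lambda k: k + 1)(0)

    one = succ(zero)
    two = succ(one)
    three = add(one)(two)
    six = mul(two)(three)
    print(to_int(three), to_int(six))  # prints: 3 6

Arithmetic becomes function composition: mul(two)(three) applies f six times, and to_int reads that back as 6.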