
Agentgateway – Next Generation Agentic Proxy for AI Agents and MCP Servers

https://github.com/agentgateway/agentgateway
1•microflash•26s ago•0 comments

Show HN: Customize your keyboard shortcuts in Chrome with a Chrome extension

https://taupiqueur.github.io/chrome-shortcuts/
1•taupiqueur•1m ago•0 comments

Tech talent biz Andela trains up devs in GitHub Copilot

https://www.theregister.com/2025/09/03/andela_github_copilot_training/
1•rntn•1m ago•0 comments

Ghost – AI agent for beautiful presentations

https://useghost.io/
1•eustoria•2m ago•0 comments

Exxon and California Spar in Dueling Lawsuits over Plastics

https://www.nytimes.com/2025/09/01/climate/exxon-california-plastics-defamation-lawsuit.html
1•mitchbob•3m ago•1 comment

Walikancrypt

https://github.com/altilunium/walikancrypt
1•altilunium•6m ago•1 comment

Why does the Chart Increasing emoji show in red?

https://blog.emojipedia.org/why-does-the-chart-increasing-emoji-show-in-red/
1•isagues•6m ago•0 comments

The Honesty Tax

https://www.theargumentmag.com/p/the-honesty-tax
1•amadeuspagel•6m ago•0 comments

How Jet Lag Cost the Global Face of Japan Inc. His Job

https://www.wsj.com/world/asia/how-jet-lag-cost-the-global-face-of-japan-inc-his-job-5672d7a9
1•impish9208•6m ago•1 comment

Hidden Gems in Iceland

https://charlieswanderings.com/iceland/hidden-gems-in-iceland/
1•novateg•7m ago•0 comments

Vibe Coding Failures Prove AI Can't Replace Developers Yet

https://www.finalroundai.com/blog/vibe-coding-failures-that-prove-ai-cant-replace-developers-yet
2•sarathyweb•7m ago•0 comments

Developers lose focus 1,200 times a day – how MCP could change that

https://venturebeat.com/ai/developers-lose-focus-1200-times-a-day-how-mcp-could-change-that
1•rootlyhq•7m ago•0 comments

My review of Amazon's Shareholder letters

https://nandinfinitum.com/posts/amazon-shareholder-letters/
1•nanfinitum•8m ago•0 comments

Raymarching Explained Interactively

https://imadr.me/raymarching-explained-interactively/
1•ibobev•13m ago•0 comments

Building the most accurate DIY CNC lathe in the world [video]

https://www.youtube.com/watch?v=vEr2CJruwEM
2•pillars•14m ago•0 comments

TorkilsTaskSwitcher, a replacement for Windows' Alt-Tab-invoked task switcher

https://oelgaard.dk/torkils/?TorkilsTaskSwitcher
1•speckx•14m ago•0 comments

Cross-Platform Window in C

https://imadr.me/cross-platform-window-in-c/
4•ibobev•14m ago•0 comments

Rotations with Quaternions

https://imadr.me/rotations-with-quaternions/
1•ibobev•15m ago•0 comments

Supermarket giant Tesco sues VMware for breach of contract

https://www.theregister.com/2025/09/03/tesco_sues_vmware_broadcom_computacenter/
1•Daviey•16m ago•0 comments

Werner Herzog joined Instagram 9 days ago

https://www.instagram.com/accounts/login/
1•bookofjoe•16m ago•0 comments

Big shakeups to the childhood vaccination schedule could be nearing

https://www.statnews.com/2025/09/03/childhood-vaccine-schedule-at-risk-rfk-cdc-turmoil/
2•bikenaga•17m ago•0 comments

Google's move to restrict Android sideloading could face EU pushback

2•nativeforks•19m ago•0 comments

Scientists Call DOE Climate Report 'Fundamentally Incorrect'

https://insideclimatenews.org/news/02092025/scientists-respond-to-trump-energy-climate-report/
2•ndsipa_pomu•19m ago•1 comment

Global methane footprints growth and drivers 1990-2023

https://www.nature.com/articles/s41467-025-63383-5
1•bikenaga•22m ago•0 comments

Diogenes the Cynic

https://hollisrobbinsanecdotal.substack.com/p/when-i-hear-viewpoint-diversity-i
2•HR01•23m ago•0 comments

AI adoption is a UX problem

https://thenanyu.com/ux.html
2•levmiseri•23m ago•0 comments

Sprouts: Self hosting without sysadmin knowledge

https://judi.systems/sprouts/
2•hsn915•26m ago•0 comments

Show HN: Text2SQL with a Graph Semantic Layer

https://github.com/FalkorDB/QueryWeaver
3•danshalev7•26m ago•0 comments

Deathwatch – Archive Team

https://wiki.archiveteam.org/index.php/Deathwatch
1•frozenseven•26m ago•0 comments

Centralia Mine Fire

https://en.wikipedia.org/wiki/Centralia_mine_fire
1•lisper•26m ago•0 comments

Tencent Open Sourced a 3D World Model

https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager
143•mingtianzhang•2h ago

Comments

mingtianzhang•2h ago
What's your opinion on modeling the world? Some people think the world is 3D, so we need to model the 3D world. Some people think that since human perception is 2D, we can just model the 2D view rather than the underlying 3D world, since we don't have enough 3D data to capture the world but we have many 2D views.

Edited question: Thanks a lot for the feedback that human perception is not 2D. Let me rephrase: since all the visual data we see on computers can be represented as 2D images (indexed by time, angle, etc.), and we have many such 2D datasets, do we still need to explicitly model the underlying 3D world?

AIPedant•2h ago
Human perception is not 2D; touch and proprioception[1] are three-dimensional senses.

And of course it really makes more sense to say human perception is 3+1-dimensional since we perceive the passage of time.

[1] https://en.wikipedia.org/wiki/Proprioception

WithinReason•1h ago
the sensors are 2D
soulofmischief•1h ago
Two of them, giving us stereo vision. We are provided visual cues that encode depth. The ideal world model would at least have this. A world model for a video game on a monitor might be able to get away with no depth information, but a) normal engines do have this information and it would make sense to provide as much data to a general model as possible, and b) the models wouldn't work on AR/VR. Training on stereo captures seems like a win all around.
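A minimal sketch of the stereo cue in question: with a calibrated two-camera rig, depth falls out of pixel disparity as Z = f*B/d. The focal length and baseline below are assumed values for illustration, not anything from the model.

  # Depth from stereo disparity: Z = f * B / d
  # f: focal length in pixels, B: camera baseline in meters,
  # d: horizontal pixel offset of the same point between the two views.
  def depth_from_disparity(disparity_px: float,
                           focal_px: float = 700.0,    # assumed calibration
                           baseline_m: float = 0.065   # ~human interpupillary distance
                           ) -> float:
      if disparity_px <= 0:
          return float("inf")   # zero disparity: point at infinity
      return focal_px * baseline_m / disparity_px

  # 10 px of disparity puts the point ~4.6 m away; 1 px is already ~45 m out,
  # which is why stereo alone stops helping at long range.
  print(depth_from_disparity(10.0), depth_from_disparity(1.0))
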
WithinReason•55m ago
> We are provided visual cues that encode depth. The ideal world model would at least have this.

None of these world models have explicit concepts of depth or 3D structure, and adding it would go against the principle of the Bitter Lesson. Even with 2 stereo captures there is no explicit 3D structure.

2OEH8eoCRo0•1h ago
And the brain does sensor fusion to build a 3D model that we perceive. We don't perceive in 2D.

There are other sensors as well. Is the inner ear a 2D sensor?

AIPedant•1h ago
Inner ear is a great example! I mentioned in another comment that if you want to be reductive the sensors in the inner ear - the hairs themselves - are one dimensional, but the overall sense is directly three dimensional. (In a way it's six dimensional since it includes direct information about angular momentum, but I don't think it actually has six independent degrees of freedom. E.g. it might be hard to tell the difference between spinning right-side-up and upside-down with only the inner ear, you'll need additional sense information.)
reactordev•1h ago
Incorrect. My sense of touch can be activated in 3 dimensions by placing my hand near a heat source, which radiates in 3 dimensions.
AIPedant•1h ago
It is simply wrong to describe touch and proprioception receptors as 2D.

a) In a technical sense the actual receptors are 1D, not 2D. Perhaps some of them are two-dimensional, but generally mechanical touch is about pressure or tension in a single direction or axis.

b) The rods and cones in your eyes are also 1D receptors but they combine to give a direct 2D image, and then higher-level processing infers depth. But touch and proprioception combine to give a direct 3D image.

Maybe you mean that the surface of the skin is two-dimensional and so is touch? But the brain does not separate touch on the hand from its knowledge of where the hand is in space. Intentionally confusing this system is the basis of the "rubber hand illusion": https://en.wikipedia.org/wiki/Body_transfer_illusion

echelon•1h ago
The GPCRs [1] that do most of our sense signalling are each individually complicated machines.

Many of our signals are "on" by default and are suppressed upon detection: ligand binding, suppression, the signalling cascade, all sorts of encoding, ...

In any case, when all of our senses are integrated, we have rich n-dimensional input.

- stereo vision for depth

- monocular vision optics cues (shading, parallax, etc.)

- proprioception

- vestibular sensing

- binaural hearing

- time

I would not say that we sense in three dimensions. It's much more.

[1] https://en.m.wikipedia.org/wiki/G_protein-coupled_receptor

imtringued•1h ago
2D models don't have object persistence, because they store information in the viewport. Back when OpenAI released their Sora teasers, they had some scenes where they did a 360° rotation and it produced a completely different backdrop.
hambes•1h ago
you're telling me my depth perception is not creating a 3D model of the world in my brain?
rubzah•1h ago
It's 2D if you only have one eye.
__alexs•1h ago
It's not even 2D with one eye. We can estimate distance purely from the eye's focal point.
yeoyeo42•1h ago
With one eye you still have temporal parallax, depth-ordering cues (which objects occlude which in your vision), lighting cues, and the relative size of objects (things further away look smaller) combined with learned knowledge of how big things are, etc.
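The relative-size cue reduces to the pinhole relation Z = f*H/h: if you know an object's real height H and its image height h, distance follows from one focal length (assumed here for illustration).

  # Pinhole camera: an object of real height H at distance Z spans
  # h = f * H / Z pixels, so Z = f * H / h.
  def distance_from_known_size(real_height_m: float,
                               image_height_px: float,
                               focal_px: float = 700.0) -> float:  # assumed focal length
      return focal_px * real_height_m / image_height_px

  # A 1.75 m person spanning 100 px is ~12 m away; at 50 px, ~25 m.
  print(distance_from_known_size(1.75, 100.0))  # 12.25
  print(distance_from_known_size(1.75, 50.0))   # 24.5
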
supermatt•1h ago
Nope. There are a number of monocular depth cues: https://en.wikipedia.org/wiki/Depth_perception#Monocular_cue...
glitchc•1h ago
It's simple: Those who think that human perception is 2D are wrong.
KaiserPro•1h ago
A lot of text-to-"world" engines have been basically 2D, in that they create a static background and add sprites to create the illusion of 3D.

I'm not entirely convinced that this isn't one of those, and if it's not, it sure as shit was trained on one.

reactordev•1h ago
You have two eyes for a reason. The world is not 2D.
SirHackalot•1h ago
> Minimum: The minimum GPU memory required is 60GB for 540p.

Cool, I guess… if you have tens of thousands of dollars to drop on a GPU, for output that’s definitely not usable in any 3D project out of the box.

y-curious•1h ago
I mean, still awesome that it's OSS. Can probably just rent GPU time online for this
HPsquared•1h ago
I assume it can be split between multiple GPUs, like LLMs can. Or hire an H100 for like $3/hr.
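For the splitting idea, the generic pattern is accelerate-style sharding. Whether Voyager's custom pipeline supports it is unverified, so treat this as a sketch of the usual approach, not the repo's actual loading code.

  # Generic multi-GPU sharding sketch with Hugging Face accelerate.
  # Voyager ships its own pipeline, so the AutoModel entry point here
  # is an assumption, not the repo's documented loading path.
  import torch
  from transformers import AutoModel

  model = AutoModel.from_pretrained(
      "tencent/HunyuanWorld-Voyager",  # repo id from the thread
      torch_dtype=torch.bfloat16,      # halves memory vs. fp32
      device_map="auto",               # let accelerate spread layers across GPUs
  )
  print(model.hf_device_map)           # which layers landed on which device
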
kittoes•1h ago
https://www.amd.com/en/products/accelerators/radeon-pro/amd-...

This is more approachable than one might think, as you can currently find two of these for less than 1,000 USD.

esafak•1h ago
How much performance penalty is there for doubling up? What about 4x?
kittoes•49m ago
I just found out about these last week and haven't received the hardware yet, so I can't give you real numbers. That said, one can probably expect at least a 10-30% penalty when the cards need to communicate with one another. Other workloads that don't require constant communication between cards can actually expect a performance boost. Your mileage will vary.
iamsaitam•1h ago
Interesting that they chose the color red in the comparison table to mark the best score for each entry.
FartyMcFarter•1h ago
Just like the stock market in China. Red means the price is going up, green means it's going down.
jsheard•1h ago
That's also why the stonks-going-up emoji traditionally has a red line; Japan shares that convention.

https://blog.emojipedia.org/why-does-the-chart-increasing-em...

dlisboa•1h ago
By the way, people might think this has to do with communism, but it’s cultural and long predates the 20th century. Red is associated with happiness and celebration.
MengerSponge•1h ago
Almost like the communists chose what iconography to use!
mananaysiempre•57m ago
The (blood-)red flag as an anti-monarchist symbol originates in the French Revolution, was adopted by the Bolshevik faction (“the Reds”) in the Russian Civil War, and spread from there.
kridsdale1•45m ago
And ironically the news networks in 2000 chose red to show Bush’s electoral votes vs Gore, and thus we retain the notion of Red States and Blue States, even though it’s backwards.
geeunits•1h ago
You'll notice it in every piece of Western propaganda too, from movies to fashion. Red is the China call.
idiotsecant•1h ago
It would be a very uninteresting choice in China. Color is partially a cultural construction. Red doesn't mean the same thing there that it does in the West.
ambitiousslab•1h ago
This is not open source. It is weights-available.

Also, there is no training data, which would be the "preferred form" of modification.

From their license: [1]

  If, on the Tencent HunyuanWorld-Voyager version release date, the monthly active users of all products or services made available by or for Licensee is greater than 1 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.

  You must not use the Tencent HunyuanWorld-Voyager Works or any Output or results of the Tencent HunyuanWorld-Voyager Works to improve any other AI model (other than Tencent HunyuanWorld-Voyager or Model Derivatives thereof).
As well as an acceptable use policy:

  Tencent endeavors to promote safe and fair use of its tools and features, including Tencent HunyuanWorld-Voyager. You agree not to use Tencent HunyuanWorld-Voyager or Model Derivatives:
  1. Outside the Territory;
  2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
  3. To harm Yourself or others;
  4. To repurpose or distribute output from Tencent HunyuanWorld-Voyager or any Model Derivatives to harm Yourself or others; 
  5. To override or circumvent the safety guardrails and safeguards We have put in place;
  6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
  7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
  8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
  9. To intentionally defame, disparage or otherwise harass others;
  10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
  11. To generate or disseminate personal identifiable information with the purpose of harming others;
  12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including – through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
  13. To impersonate another individual without consent, authorization, or legal right;
  14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
  15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
  16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
  17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
  18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  19. For military purposes;
  20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
[1] https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob...
vintermann•1h ago
The exclusion of the EU, UK and South Korea suggests to me they've trained on data those countries would be mad about, or would demand money for.
heod749•1h ago
>The exclusion of the EU, UK and South Korea suggests to me they've trained on data those countries would be mad about, or would demand money for.

Or, those countries are trying to regulate AI.

Hard to feel bad for EU/UK. They tried their best to remain relevant, but lost in the end (talent, economy, civil rights).

thrance•30m ago
Peak American thinking: megacorps and dictatorships stealing data with no respect whatsoever for privacy and not giving anything back is good. Any attempt to defend oneself from that is foolish and should be mocked. I wish you people could realize you're getting fucked over as much as the rest of us.
tbrownaw•1h ago
> Also, there is no training data, which would be the "preferred form" of modification.

Isn't fine-tuning a heck of a lot cheaper?

Nevermark•6m ago
Fine-tuning with the original data plus the fine-tuning data has more predictable results.

Just training on new data moves a model away from its previous behavior, to an unpredictable degree.

You can’t even test for the change without the original data.

NitpickLawyer•35m ago
> This is not open source. It is weights-available.

> Also, there is no training data, which would be the "preferred form" of modification.

This is not open source because the license is not open source. The second line is not correct, though: the "preferred form" of modification is the weights, not the data. Data is how you modify those weights.

htrp•29m ago
Outside of AI2, I'm not sure anyone truly open-sources AI models (training logs, data, etc.).

I think at this point, "open source" is practically shorthand for weights-available.

Ragnarork•1h ago
The license used for this is quite a read.

  Available to the world except the European Union, the UK, and South Korea
Not sure what led to that choice. I'd have expected either the U.S. & Canada to be in there, or not these.

  3. DISTRIBUTION.
  [...]
  c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan”; [...]
What's that doing in the license? What are the implications of a license-listed "encouragement"?
mushufasa•1h ago
The EU and the others listed are actively trying to regulate AI. Permissive OSS licenses' "one job" is to disclaim liability. It's interesting that they are simply prohibiting usage altogether in jurisdictions where the definition of liability is uncertain and worrying to the authors.
amelius•36m ago
That would be an extremely lazy way of writing a license.
jandrewrogers•17m ago
Unlikely to be laziness, since they went to the effort of writing a custom license in the first place.

A more plausible explanation is the requirements and obligations of those markets are ambiguous or open-ended in such a way that they cannot be meaningfully limited by a license, per the lawyers they retain to create things like licenses. Lawyers don’t like vague and uncertain risk, so they advised the company to reduce their risk exposure by opting out of those markets.

NullCascade•54m ago
Maybe private Chinese AI labs consider EU/UK regulators a bigger threat than US anti-China hawks.
NitpickLawyer•38m ago
> Not sure what led to that choice.

It's the EU AI Act. I tried their cute little app a week ago, the one designed to let you know whether you comply, what you need to report, and so on. After selecting SME - open source - research - no client-facing anything, I got a "basically yes, but likely no, you still have to register with bla-bla, announce yak-yak, and do the dooby-doo."

It was a mess when they proposed it, it was said to be getting better while they worked on it, and it turns out to be just as unclear and bureaucratic now that it's out.

kookamamie•27m ago
At the same time, Time selected Henna Virkkunen for their TIME100 AI list: https://time.com/collections/time100-ai-2025/7305860/henna-v... - she is one of the architects of this AI Act nonsense.
flanked-evergl•25m ago
If I were Russia and/or China and wanted to eliminate the EU as a potential economic and military rival, I don't think I could have come up with a better way to do it than EU regulations. If it were not for the largesse of the US, the EU would become a vassal of Russia and/or China. And I think the US is running out of goodwill very rapidly. The EU could, of course, shape up, but it won't.
L_226•8m ago
Which app is that?
NitpickLawyer•6m ago
Check here - https://artificialintelligenceact.eu/assessment/eu-ai-act-co...

Start on the right, and click through the options. At the end you'll get a sort of assessment of what you need to do.

whimsicalism•37m ago
The EU has very difficult AI and data regulations; not sure about South Korea.
wkat4242•23m ago
I wonder if you can still download and use it here in the EU. I don't care about licensing legalese, but I guess you have to sign up somewhere to get the goods?
notpushkin•16m ago
It’s on HF: https://huggingface.co/tencent/HunyuanWorld-Voyager
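For anyone who just wants the files locally, the standard huggingface_hub pull from that repo (this downloads the weights only; whether the license permits use in your jurisdiction is the separate question above):

  from huggingface_hub import snapshot_download

  # Fetch the released checkpoint files into the local HF cache.
  local_dir = snapshot_download(repo_id="tencent/HunyuanWorld-Voyager")
  print(local_dir)  # path to the downloaded files
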
bilsbie•1h ago
I’m waiting like crazy for one of these to show up in VR.
jsheard•1h ago
Please don't hold your breath; they're still pretty far from high-res 120fps with consistent stereo and milliseconds of latency.
geokon•57m ago
Isn't it picture-to-3D-model? You'd generate the environment/model ahead of time and then "dive in" to the photo.
jsheard•53m ago
I suppose that's an option yeah, but when people envision turning this kind of thing into a VR holodeck I think they're expecting unbounded exploration and interactivity, which precludes pre-baking everything. Flattening the scene into a diorama kind of defeats the point.
kridsdale1•46m ago
Check out visionOS 26’s Immersive Photo mode. Any photo in your iCloud library gets converted by an on-device model to (I assume) a Gaussian splat 3D scene that you can pan and dolly around in. It’s the killer feature that justifies the whole cost of Vision Pro. The better the source data, the better it works.

I can literally walk into scenes I shot on my Nikon D70 in 2007 and they, and the people, look real.

user_7832•1h ago
I see a lot of skeptical folks here... isn't this the first such model? I remember seeing a lot of image-to-3D models before, but they'd all produce absurd results within a few moments. This seems to produce really good output in comparison.
explorigin•1h ago
If you click on the link, they show a comparison chart with other similar models.
neuronic•1h ago
> isn't this the first such model?

The linked GitHub page has a comparison with other world models...

geokon•59m ago
Seems like the kind of thing Street View data would have been perfect to train on.

I wonder if you could loop back the last frame of each video to extend the generated world further, creating a kind of AI fever dream.
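That loop is just autoregressive conditioning on the tail frame; a sketch with a hypothetical generate_clip() standing in for whatever inference entry point the repo actually exposes:

  # Sketch of the loop-back idea: re-seed each generation with the previous
  # clip's final frame. generate_clip() is hypothetical, not the repo's API.
  def extend_world(seed_image, generate_clip, n_clips: int = 5):
      frames = [seed_image]
      for _ in range(n_clips):
          clip = generate_clip(condition_image=frames[-1])  # list of frames
          frames.extend(clip)
      # Errors compound: each hop hallucinates from the last frame only,
      # hence the "fever dream" drift.
      return frames
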

kridsdale1•49m ago
Why the past tense? Google is holding on to all of that, going back years.
NullCascade•52m ago
What is currently the best model (or multi-model process) to go from text-to-3D-asset?

Ideally based on FOSS models.

neutronicus•43m ago
Piggybacking ... what about text-to-sprite-sheet? Or even text-and-single-source-image-to-sprite-sheet?
stargrazer•36m ago
It explicitly says it uses a single picture. Wouldn't the world become even more expressive if multiple pictures could be added, as in a photogrammetry scenario?
amelius•35m ago
Can I use this to replace a LiDAR?
garbthetill•25m ago
If it does, then Elon really won the no-lidar bet.
incone123•9m ago
It's generating a 3D world from a photo or other image, rather than giving you a 3D model of the real world.
amelius•7m ago
Look at the examples. It can generate a depth map.
ENGNR•6m ago
Depends on how many liberties it takes in imagining the world.

LiDAR is direct measurement.
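On the depth-map comparison: a generated depth map back-projects to a LiDAR-style point cloud with the standard pinhole inversion, though the depths are estimated rather than measured. A minimal numpy sketch with assumed intrinsics:

  # Back-project an H x W depth map to 3D points:
  #   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  with Z = depth[v, u].
  # Intrinsics fx, fy, cx, cy are assumed; a real LiDAR measures Z directly.
  import numpy as np

  def depth_to_points(depth, fx=700.0, fy=700.0, cx=None, cy=None):
      h, w = depth.shape
      cx = w / 2 if cx is None else cx
      cy = h / 2 if cy is None else cy
      u, v = np.meshgrid(np.arange(w), np.arange(h))
      x = (u - cx) * depth / fx
      y = (v - cy) * depth / fy
      return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # (H*W, 3)

  points = depth_to_points(np.full((480, 640), 2.0))  # a flat wall 2 m away
  print(points.shape)  # (307200, 3)
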

krystofee•24m ago
I think it's a matter of time before we have photorealistic, playable computer games generated by these engines.
gadders•22m ago
And hopefully AI-powered NPCs to fight against/interact with.
Keyframe•6m ago
There's a reason Tencent is doing this: https://en.wikipedia.org/wiki/Tencent#Foreign_studio_assets