Cool, I guess… If you have tens of thousands of $ to drop on a GPU for output that’s definitely not usable in any 3D project out-of-the-box.
It's more approachable than one might think, as you can currently find two of these for less than 1,000 USD.
https://blog.emojipedia.org/why-does-the-chart-increasing-em...
Also, there is no training data, which would be the "preferred form" of modification.
From their license: [1]
If, on the Tencent HunyuanWorld-Voyager version release date, the monthly active users of all products or services made available by or for Licensee is greater than 1 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.
You must not use the Tencent HunyuanWorld-Voyager Works or any Output or results of the Tencent HunyuanWorld-Voyager Works to improve any other AI model (other than Tencent HunyuanWorld-Voyager or Model Derivatives thereof).
As well as an acceptable use policy: Tencent endeavors to promote safe and fair use of its tools and features, including Tencent HunyuanWorld-Voyager. You agree not to use Tencent HunyuanWorld-Voyager or Model Derivatives:
1. Outside the Territory;
2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
3. To harm Yourself or others;
4. To repurpose or distribute output from Tencent HunyuanWorld-Voyager or any Model Derivatives to harm Yourself or others;
5. To override or circumvent the safety guardrails and safeguards We have put in place;
6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
9. To intentionally defame, disparage or otherwise harass others;
10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
11. To generate or disseminate personal identifiable information with the purpose of harming others;
12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including through the use of bot-generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
13. To impersonate another individual without consent, authorization, or legal right;
14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
19. For military purposes;
20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
[1] https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob...

Or, those countries are trying to regulate AI.
Hard to feel bad for EU/UK. They tried their best to remain relevant, but lost in the end (talent, economy, civil rights).
Isn't fine-tuning a heck of a lot cheaper?
Just training on new data moves a model away from its previous behavior, to an unpredictable degree.
You can’t even test for the change without the original data.
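A minimal sketch of why, assuming two hypothetical PyTorch classifiers and whatever probe inputs you can scrape together: you can measure how far fine-tuning moved the model on inputs you happen to have, but without the original training data you can't probe the distribution the base model was actually shaped by.

    import torch
    import torch.nn.functional as F

    def behavioral_drift(base_model, tuned_model, probe_batches):
        # Mean KL(base || tuned) over the probe inputs: how far the
        # fine-tuned model's outputs depart from the original's.
        total = 0.0
        with torch.no_grad():
            for x in probe_batches:
                p = F.log_softmax(base_model(x), dim=-1)   # original behavior
                q = F.log_softmax(tuned_model(x), dim=-1)  # post-fine-tune behavior
                total += F.kl_div(q, p, reduction="batchmean", log_target=True).item()
        return total / len(probe_batches)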
> Also, there is no training data, which would be the "preferred form" of modification.
This is not open source, because the license is not open source. The second line is not correct, though: the "preferred form" of modification is the weights, not the data. Data is how you modify those weights.
I think at this point, open source is practically shorthand for weights available
Available to the world except the European Union, the UK, and South Korea
Not sure what led to that choice. I'd have expected either the U.S. & Canada to be in there, or not these.

3. DISTRIBUTION.
[...]
c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan”; [...]
What's that doing in the license? What are the implications of a license-listed "encouragement"?

A more plausible explanation is that the requirements and obligations of those markets are ambiguous or open-ended in such a way that they cannot be meaningfully limited by a license, per the lawyers the company retains to create things like licenses. Lawyers don't like vague and uncertain risk, so they advised the company to reduce its exposure by opting out of those markets.
It's the EU AI Act. I tried their cute little app a week ago, the one designed to let you know whether you comply, what you need to report, and so on. Even after selecting SME - open source - research - no client-facing anything, I got a "basically yes, but likely no, you still have to register for bla-bla, announce yak-yak, and do the dooby-doo."
It was a mess when they proposed it, it was said to have improved while they were working on it, and it turns out to be just as unclear and bureaucratic now that it's out.
Start on the right, and click through the options. At the end you'll get a sort of assessment of what you need to do.
I can literally walk into scenes I shot on my Nikon D70 in 2007 and they, and the people, look real.
The linked Github page has a comparison with other world models...
I wonder if you could loop the last frame of each video back in as input to extend the generated world further, creating a kind of AI fever dream.
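A rough sketch of that loop, with the video model hidden behind a hypothetical generate_clip callable (not Voyager's actual API): each round re-seeds generation with the final frame of the previous clip, and the error that accumulates across rounds is exactly where the fever-dream drift would come from.

    from typing import Callable, List

    Frame = bytes  # stand-in type for one decoded image frame

    def extend_world(generate_clip: Callable[[Frame, str], List[Frame]],
                     seed_frame: Frame, prompt: str, rounds: int = 5) -> List[Frame]:
        frames = [seed_frame]
        for _ in range(rounds):
            clip = generate_clip(frames[-1], prompt)  # condition on the last frame
            frames.extend(clip)                       # drift compounds each round
        return frames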
Ideally based on FOSS models.
Lidar is direct measurement
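To spell out "direct" with a back-of-the-envelope sketch: lidar times a light pulse's round trip and converts it to range, with no inference from appearance involved.

    C = 299_792_458.0  # speed of light, m/s

    def lidar_range(round_trip_s: float) -> float:
        return C * round_trip_s / 2  # halved: the pulse travels out and back

    print(lidar_range(66.7e-9))  # a ~66.7 ns echo puts the surface ~10 m away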
mingtianzhang•2h ago
Fixed question: Thanks a lot for the feedback that human perception is not 2D. Let me rephrase the question: since all the visual data we see on computers can be represented as 2D images (indexed by time, angle, etc.), and we have many such 2D datasets, do we still need to explicitly model the underlying 3D world?
AIPedant•2h ago
And of course it really makes more sense to say human perception is 3+1-dimensional since we perceive the passage of time.
[1] https://en.wikipedia.org/wiki/Proprioception
WithinReason•55m ago
None of these world models have an explicit concept of depth or 3D structure, and adding one would go against the principle of the Bitter Lesson. Even with two stereo captures there is no explicit 3D structure.
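For contrast, a minimal sketch of what explicit 3D structure from a stereo pair would look like: classic triangulation, depth = focal length * baseline / disparity (the focal length and baseline below are made-up example values). The point is that these world models never compute anything like this; whatever depth they "know" stays implicit in the weights.

    import numpy as np

    def depth_from_disparity(disparity_px: np.ndarray,
                             focal_px: float = 700.0,    # example focal length, pixels
                             baseline_m: float = 0.12) -> np.ndarray:  # example camera spacing
        d = np.maximum(disparity_px, 1e-6)  # guard against zero disparity
        return focal_px * baseline_m / d    # per-pixel depth in meters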
2OEH8eoCRo0•1h ago
There are other sensors as well. Is the inner ear a 2D sensor?
AIPedant•1h ago
a) In a technical sense the actual receptors are 1D, not 2D. Perhaps some of them are two dimensional, but generally mechanical touch is about pressure or tension in a single direction or axis.
b) The rods and cones in your eyes are also 1D receptors but they combine to give a direct 2D image, and then higher-level processing infers depth. But touch and proprioception combine to give a direct 3D image.
Maybe you mean that the surface of the skin is two dimensional and so is touch? But the brain does not separate touch on the hand from its knowledge of where the hand is in space. Intentionally confusing this system is the basis of the "rubber hand illusion" https://en.wikipedia.org/wiki/Body_transfer_illusion
echelon•1h ago
Many of our signals are "on" by default and are instead suppressed upon detection: ligand binding, suppression, the signalling cascade, all sorts of encoding...
In any case, when all of our senses are integrated, we have rich n-dimensional input.
- stereo vision for depth
- monocular vision optics cues (shading, parallax, etc.)
- proprioception
- vestibular sensing
- binaural hearing
- time
I would not say that we sense in three dimensions. It's much more.
[1] https://en.m.wikipedia.org/wiki/G_protein-coupled_receptor
KaiserPro•1h ago
I'm not entirely convinced that this isn't one of those, or if it's not, it sure as shit was trained on one.