If you have a predictor/compressor LLM which was trained on all the movies in the world, would that not also be infringement?
An LLM is (or can be used as) a compression algorithm, but it is not compressed data. It is possible for an overfit model to exactly predict (or reproduce) a particular output, but it's not possible for one to reproduce all the outputs, due to the pigeonhole principle.
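A one-line version of the counting argument behind that: there are 2^n distinct bitstrings of length n, but only 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 strings shorter than n, so no fixed-size model or scheme can losslessly reproduce every possible input from something smaller.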
To reiterate - LLMs are not compressed data.
If you're interested in this, it's worth reading about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and going from there.
In general, lossless compression works by predicting the next (letter/token/frame) and then succinctly encoding the difference from the prediction in the data stream. The better you predict, the less you need to encode, and the better you compress.
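A toy sketch of that loop (my own example, not from any real codec: a trivial previous-byte predictor, with zlib standing in for the entropy-coding stage):

    import math
    import zlib

    def delta_encode(data: bytes) -> bytes:
        # Predict each byte as equal to the previous one; keep only the error.
        prev, out = 0, bytearray()
        for b in data:
            out.append((b - prev) & 0xFF)  # residual, mod 256 (lossless, invertible)
            prev = b
        return bytes(out)

    # A smooth signal: the raw bytes vary a lot, but the residuals cluster near zero.
    signal = bytes(128 + int(40 * math.sin(i / 10)) for i in range(10_000))
    print(len(zlib.compress(signal, 9)))                # no prediction
    print(len(zlib.compress(delta_encode(signal), 9)))  # predict, then encode: much smaller

The better the predictor (an LLM being an extreme case), the more skewed the residuals and the fewer bits the entropy coder needs.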
The flip side of this is that all fields of compression have a lot to gain from progress in AI.
Fabrice Bellard's nncp (mentioned in a different comment) leads.
His comment immediately after describes exactly what happened:
> Even before it has ceased to exist, the MPEG engine had run out of steam – technology- and business-wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed into roadblocks.
Big companies abused the setup that he was responsible for. Gentlemen's agreements to work together for the benefit of all got gamed into patent landmines and it happened under his watch.
Even many of the big corps involved called out the bullshit; notably, Steve Jobs refused to release a new QuickTime until they fixed some of the most egregious parts of the AAC licensing, way back in 2002.
https://www.zdnet.com/article/apple-shuns-mpeg-4-licensing-t...
It was sweet to see “over the Net”…
https://theonion.com/new-5-000-multimedia-computer-system-do...
Today though, the mocking doesn’t make sense and is confusing. I haven’t ever owned a TV.
(and it really was v.92; I still have the double-bong towards the end of the handshake emblazoned in my memory)
Though with the codecs and hardware of that time, I doubt the quality at even that size would be great. Compare an old 349 MB cap of a Stargate episode (sized so two fit on a CD-R/-RW; likely 480p, though smaller wasn't uncommon) picked up in the early/mid 2000s to a similarly sized file compressed using H.265, or even H.264, on modern hardware.
H.265 or H.264 would absolutely crush Xvid for compressing HD content, both in size and quality.
lol
If you went to Blockbuster, you could move 4.7 GB to your home in half the time (unless your family was involved in choosing the movie, which would slow you down).
I also remember when they went through and re-encoded all of the videos so they could play on the original model iPhone.
More context for this: Chiariglione has been extremely vocal that FRAND patent royalties are entirely necessary for the development of video compression tools, and believes royalty-free standards outpacing the ones that cost money represents the end of innovation in video codecs.
To be clear, Chiariglione isn't opposed to royalty-free standards at all; he just wants them to be deliberately worse, so that people who need better compression will pay independent researchers for it. His MPEG actually wound up trying to make such a standard: IVC. You've never heard of MPEG IVC because Samsung immediately claimed ownership over it, and ISO patent policy does not let MPEG demand notice of which specific patents it would need to remove, so long as the owner agrees to negotiate a license with a patent pool.
You might think at this point that Chiariglione is on the side of the patent owners, but he's actually not. In fact, it's specifically those patent owners that pushed him out of MPEG.
In the 90s, patent owners were making bank off MPEG-2 royalties, but having trouble monetizing anything newer. A patent pool never actually formed for H.263, and the one for MPEG-4 couldn't agree on a royalty-free rate for Internet streaming[0]. H.264 is practically royalty-free for online video, but that only happened because Google bought On2[1] and threatened to make YouTube exclusively serve VP8. The patent owners very much resent this state of affairs and successfully sabotaged efforts at MPEG to make dedicated royalty-free codecs.
The second and more pressing issue (to industry, not to us) is the fact that H.265 failed to form a single patent pool. There are actually three of them, thanks to skulduggery by Access Advance, which forced people to pay for the same patent license twice by promising a sweetheart licensing deal[2] to Samsung. I'm told H.266 is even more insane, mostly because Access Advance is forcing people to buy licenses in a package deal to cover up the fact that they own very little of H.266.
Chiariglione is only pro-patent-owner in the narrow sense that he believes research needs to be 'paid for'. His attempt to keep patent owners honest got him sidelined and marginalized in ISO, which is why he left. He's since made his own standards organization, with blackjack and hookers^Wartificial intelligence. MPAI's patent policy actually requires that companies agree to 'framework licenses' - i.e. promise to actually negotiate with MPAI's own patent pool specifically. No clue if they've actually shipped anything useful.
Meanwhile, the rest of the Internet video industry coalesced around Google and Xiph's AV1 proposal. They somehow manage to do without direct royalty payments for AV1, which to me indicates that this research didn't need to be 'paid for' after all. Though, the way Chiariglione talks about AV1, you'd think it's some kind of existential threat to video encoding...
[0] Practically speaking, this meant MPEG-4 ASP was predominantly used by pirates, as legit online video sites that worked in browsers were using Flash based players, and Flash only supported H.263 and VP6.
[1] The company that made VP3 (Theora) and VP6
[2] The idea is that Samsung and other firms are "net implementer" companies. They own some of H.265, but they need to license the rest of it from MPEG-LA. So Access Advance promised those companies a super-low rate on the patents they need if they all pulled out of MPEG-LA, and they make it up by overcharging everyone else, including making them pay extra if they'd already gotten licenses from MPEG-LA before the Access companies pulled out of it.
Copyright is cancer. The faster AI industry is going to run it into the ground, the better.
Or is it MPEG LA? https://wiki.endsoftwarepatents.org/wiki/MPEG_LA
I remember this same guy complaining investments in the MPEG extortionist group would disappear because they couldn't fight against AV1.
He was part of a patent mafia and is only lamenting that he lost power.
Hypocrisy in its finest form.
https://blog.chiariglione.org/a-crisis-the-causes-and-a-solu...
He is not a coder, not a researcher; he is just a player in the worst game in this industry: making money from patents and "standards" you have to pay for in order to use, implement, or claim compatibility with.
> At long last everybody realises that the old MPEG business model is now broke
And the entire post is about how dysfunctional MPEG is and how AOM rose to deal with it. It is tragic to waste so much time and money only to produce nothing. He's criticizing the MPEG group, its infighting, its licensing model, and the leadership of its member companies. He's an MPEG member saying MPEG's business model is broken yet no one has the desire to fix it, so it will be beaten by a competitor. Would you not want to see your own organization reform rather than die?
Reminder AOM is a bunch of megacorps with profit motive too, which is why he thinks this ultimately leads to stalled innovation:
> My concerns are at a different level and have to do with the way industry at large will be able to access innovation. AOM will certainly give much needed stability to the video codec market but this will come at the cost of reduced if not entirely halted technical progress. There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.
> Companies will slash their video compression technology investments, thousands of jobs will go and millions of USD of funding to universities will be cut. A successful “access technology at no cost” model will spread to other fields.
Money is the motivator. Figuring out how to reward investment in pushing the technology forward is his concern. It sounds like he is open to suggestions.
I don't think he fully considered the motivations of Alliance members like Google (YouTube), Meta and Netflix and the lengths they'll go to optimize operational costs of delivering content to improve their bottom line.
He first points out that a royalty-free format was actually better than the patent-pending alternative that he was responsible for pushing.
In the end, he concludes that the progress of video compression would stop if developers can't make money from patents, providing a comparison table on codec improvements that conveniently omits the aforementioned royalty-free codec being better than the commercial alternatives pushed by his group.
Besides the above fallacy, the article is simply full of self-important boasting and religious overtones.
DCVC-RT (https://github.com/microsoft/DCVC), a deep-learning-based video codec, claims to deliver 21% more compression than H.266.
One of the compelling edge-AI use cases is creating deep-learning-based audio/video codecs on consumer hardware.
One of the large/enterprise AI use cases is creating a coding model that generates deep-learning-based audio/video codecs for consumer hardware.
This makes zero sense, right? Even if it were applicable, why would it need a standard? There is no interoperability between game servers of different games.
Maybe these sorts of handshake agreements and industry collaboration were necessary to get things rolling in 198x. If so, then I thank the MPEG group for starting that work. But by 2005 or so when DivX and XviD and h264 were heating up, it was time to move beyond that model towards open interoperability.
And, boy howdy, they did.
MPEG was also joint with the video conferencing standards group within the CCITT (now International Telecommunications Union), which generally required FRAND declarations from patent holders.
My recollection is that MPEG-LA was set up as a clearing house so that implementers could go to one licensing organization, rather than negotiating with each patent owner individually.
All the patents for MPEG-1 and MPEG-2 must have expired by now.
Besides patent gridlock, there is a fundamental economic problem with developing new video coding algorithms. It's very difficult to develop an algorithm that will halve the bit rate at the same quality, to get it implemented in hardware products and software, and to introduce it broadly into the existing video services infrastructure. Plus, doubling the compression is likely to more than double the processing required. On the other hand, within a couple of years the network engineers will double the bit rate for the same cost, and the storage engineers will double the storage for the same cost. They, like processing, follow their own Moore's Law.
So reducing the cost by improving codecs is more expensive and takes more effort and time than just waiting for the processor, storage and networking cost reductions. At least that's been true over the 3 decades since MPEG 2.
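Rough numbers to make that concrete (my own assumptions, not the parent's): if network and storage cost-performance double roughly every 2 years, a codec that halves bitrate but takes ~10 years from research to broad hardware deployment delivers a one-time 2x saving into a world where bandwidth has meanwhile become ~2^5 = 32x cheaper.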
jbverschoor•2d ago
Av1 for 7
The problem is every platform wants to force its own codec and earn royalties from the rest of the world.
They are literally sabotaging it. JXL support even got removed from Chrome.
Investment in software adoption is next to zero.
In hardware it's a different story, and I'm not sure to what extent each codec can be properly accelerated.
newsclues•2d ago
Audio and video codecs, and document formats like PDF, are foundational to computing and modern life, from government to business, so there is a great incentive to make it all open and free.
oblio•2d ago
Basically MBA drool material.
newsclues•2d ago
But education receives a lot of funding from the government.
I think academia should build open source technology (that people can commercialize on their own with the expertise).
Higher education doesn’t need to have massive endowments of real estate and patent portfolio to further educ… administration salaries and vanity building projects.
Academia can serve the world with technology and educated minds.
nullc•2d ago
My expectation from experience when implementing something from a DSP paper is that the result will be unreproducible without contacting the authors for some undisclosed table of magic constants. After obtaining it, the results may match, but only for the test images they reported results on. Results on anything else will be much worse.
Also it's normal for techniques from the literature to have computational/memory bandwidth costs two orders of magnitude greater than justified for even their (usually exaggerated) stated levels of performance.
And then their comparison points are almost inevitably implemented so naively as to make the comparison useless.
It's always difficult because improvements in this domain (like many other engineering domains) are significantly about tradeoffs ... and tradeoffs are difficult to weigh in a pure research environment without the context of concrete applications. They're also difficult to weigh with implementation cleverness having such a big impact particularly since industry heavily drains academia of naturally skilled software engineers.
And as other comments have pointed out, academia is in some sense among the worst of the patent abusers. They'll often develop technology just far enough to lay patent mines around the field, but not far enough to produce something useful out of it. The risk that you spend the significant effort to turn a concept into something usable only to have some patent holder show up with a decade old patent to shake you down is a big incentive against investment.
thinkingQueen•2d ago
And regarding "royalty-free" codecs, please read this: https://ipeurope.org/blog/royalty-free-standards-are-not-fre...
blendergeek•2d ago
Unsurprisingly, companies that are losing money because their rent-seeking on media codecs is now over will spread FUD [0] about royalty-free codecs.
[0] https://en.wikipedia.org/wiki/Fear%2C_uncertainty_and_doubt
cnst•2d ago
Such a huge catch that the companies that offer you a royalty-free license only do so on the condition that you're not gonna turn around and abuse your own patents against them!
How exactly is that a bad thing?
How is it different from the (unwritten) social contracts of all humans and even of animals? How is it different from the primal instincts?
pornel•2d ago
In the early days of MPEG, codec development was difficult, because most computers weren't capable of encoding video and the field was in its infancy.
However, by the end of '00s computers were fast enough for anybody to do video encoding R&D, and there was a ton of research to build upon. At that point MPEG's role changed from being a pioneer in the field to being an incumbent with a patent minefield, stopping others from moving the field forward.
mike_hearn•2d ago
The point is, if there had been no incentives to develop codecs, there would have been no MPEG. Other people would have stepped into the void, and sometimes did, e.g. RealVideo, but without legal IP protection the codecs would just have been entirely undocumented and heavily obfuscated, moving into tamper-proofed ASICs much faster.
tsimionescu•2d ago
The browsers are an interesting case. Neither Chrome nor Edge are really open source, despite Chromium being so, and they are both funded by advertising and marketing money from huge corporations. Safari is of course closed source. And Firefox is an increasingly tiny runner-up. So I don't know if I'd really count Chromium as a FLOSS success story.
Overall, I don't think FLOSS has had the kind of effect that many activists were going for. What has generally happened is that companies building software have realized that there is a lot of value to be found in treating FLOSS software as a kind of barter agreement between companies, where maybe Microsoft helps improve Linux for the benefit of all, but in turn it gets to use, say, Google's efforts on Chromium, and so on. The fact that other companies then get to mooch off of these big collaborations doesn't really matter compared to getting rid of the hassle of actually setting up explicit agreements with so many others.
_alternator_•2d ago
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148
sitkack•2d ago
The entire internet, end to end, runs on FLOSS.
tsimionescu•2d ago
It's still almost impossible to have a digital life that doesn't involve significant use of proprietary software, and the vast majority of users do their computing almost exclusively through proprietary software. The fact that this proprietary software is a bit of glue on top of a bunch of FLOSS libraries possibly running on a FLOSS kernel that uses FLOSS libraries to talk to a FLOSS router doesn't really buy much actual freedom for the end users. They're still locked in to the proprietary software vendors just as much as they were in the 90s (perhaps paying with their private data instead of actual money).
thwarted•2d ago
> This is still the argument for software copyright.
And open source licensing is based on and relies on copyright. Patents and copyright are different kinds of intellectual property protection and incentivize different things. Copyright in some sense encourages participation and collaboration because you retain ownership of your code. The way patents are used discourages participation and collaboration.
sitkack•2d ago
The firewall of patents exists precisely because digital video is a way to shake down the route media has to travel to get to the end user.
Codecs are not "harder than" compilers, yet the field of compilers was blown completely open by GCC. Capital didn't see the market opportunity because there wasn't the same possibility of being a gatekeeper for so much attention and money.
The patents aren't because it is difficult, the patents are there because they can extract money from the revenue streams.
mike_hearn•2d ago
Modern video codecs are harder than compilers. You have to have good ASIC development expertise to do them right, for example, which you don't need for compilers. It's totally feasible for a single company to develop a leading-edge compiler, whereas you don't see that in video codecs; historically they've been collaborations.
pornel•2d ago
Hardware vendors don't benefit from the patent pools. They usually get nothing from them, and are burdened by having to pass per-unit licensing costs on to their customers.
It's true that designing an ASIC-friendly codec needs special considerations, and benefits from close collaboration with hardware vendors, but it's not magic. The general constraints are well-known to codec designers (in open-source too). The commercial incentives for collaboration are already there — HW vendors will profit from selling the chipsets or licensing the HW design.
The patent situation is completely broken. The commercial codecs "invent" coding features of dubious utility, mostly unnecessary tweaks on old stuff, because everyone wants to have their patent in the pool. It ends up being a political game, because the engineering goal is to make the simplest most effective codec, but the financial incentive is to approve everyone's patented add-ons regardless of whether they're worth the complexity or not.
Meanwhile everything that isn't explicitly covered by a patent needs to be proven to be 20 years old, and this limits MPEG too. Otherwise nobody can prove that there won't be any submarine patent that could be used to set up a competing patent pool and extort MPEG's customers.
So our latest-and-greatest codecs are built on 20-year-old ideas, with or without some bells and whistles added. The ASICs often don't use the bells and whistles anyway, because the extra coding features may not even be suitable for ASICs, and usually have diminishing returns (like 3x slower encode for 1% better quality/filesize ratio).
mafuy•2d ago
The only reason I can think of why you would say this is that nowadays we have good compiler infrastructure that works with many hardware architectures, and it has become easy to create or modify compilers. But that's only because it was so insanely complicated that it had to be redone from scratch to become generalizable, which led to LLVM and the subsequent direct and indirect benefits everywhere. That's the work of thousands of the smartest people over 30 years.
There is no way that a single company could develop a state of the art compiler without using an existing one. Intel had a good independent compiler and gave up because open source had become superior.
For what it's worth, look at the state of FPGA compilers. They are so difficult that every single one of them that exists is utter shit. I wish it were different.
mike_hearn•2d ago
Not only can they do it but some companies have done it several times. Look at Oracle: there's HotSpot's C2 compiler, and the Graal compiler. Both state of the art, both developed by one company.
Not unique. Microsoft and Apple have built many compilers alone over their lifespan.
This whole thing is insanely subjective, but that's why I'm making fun of the "unsubstantiated claim" bit. How exactly are you meant to objectively compare this?
cornholio•2d ago
As long as IP law continues in the same form, the alternative to that is completely closed agreements among major companies that will push their own proprietary formats and aggressively enforce their patents.
The fair world where everyone is free to create a new thing, improve upon the frontier codecs, and get a fair reward for their efforts, is simply a fantasy without patent law reform. In the current geopolitical climate, it's very very unlikely for nations where these developments traditionally happened, such as US and western Europe, to weaken their IP laws.
ZeroGravitas•2d ago
They didn't get people to agree on terms up front; they made the final codec with interlocking patents embedded from hundreds of parties, made no attempt to avoid random outsiders' patents, and then, once it was done, tried to come to a license agreement when every minor patent holder had an effective veto on the resulting pool. That's how you end up with multiple pools plus people who own patents and aren't members of any of the pools. It's ridiculous.
My minor conspiracy theory is that if you did it right, then you'd basically end up with something close to open source codecs as that's the best overall outcome.
Everyone benefits from only putting in freely available ideas. So if you want to gouge people with your patents you need to mess this up and "accidentally" create a patent mess.
phkahler•2d ago
You can say that, but this discussion is in response to the guy who started MPEG and later shut it down. I don't think he'd say its harsh.
Taek•2d ago
Codec development is slow and expensive because you can't just release a new codec; you have to dance around patents.
scottLobster•2d ago
For the remaining 99.99% of us, we have to negotiate for resources as best we can. That typically means maximizing shareholder value in exchange for a cut of the profits. Not all your labor needs to be vacuumed up, I make enough to support my family, live a relatively safe and comfortable life with some minor luxuries and likely a secure retirement. Better deal than most people get today.
scottLobster•2d ago
Regardless, why are you white-knighting for him? He made a moral argument about career choice, and I responded to said argument as someone who took the other side. This is a discussion board, we discuss things.
nullc•2d ago
Work inside patent-driven development groups also suffers substantial complexity bloat, because there is a huge incentive for each participant to get a patentable component into the standard in order to benefit from cross-licensing. Often these 'improvements' are insignificant or even a net loss (the bitstream cost of signalling them is greater than their improvement over any credible collection of material).
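Back-of-envelope illustration (my numbers, purely hypothetical): a per-block on/off flag for the 8x8 blocks of a 1080p frame costs 1920*1080/64 ≈ 32,400 bits per frame, or roughly 1 Mbit/s at 30 fps before entropy coding, so a tool signalled that way has to save more than that just to break even.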
TheTon•2d ago
Sometimes there are hybrid coders that can use some of the resources on the chip and some shader code to handle new codecs or codec features after the fact, but you pay a power and performance penalty to use these.
astrange•1d ago
Of course, you can design it to be as flexible as you want; but at one end of that you just have a regular CPU again.
thinkingQueen•2d ago
Until the new codec comes together, all those small optimizations aren't really worth much, so it's a long-term research project with potentially zero return on investment.
And yes, most of the small optimizations are patented, something that I've come to understand isn't viewed very favorably by most.
phkahler•2d ago
Codecs are like infrastructure not products. From cameras to servers to iPhones, they all have to use the same codecs to interoperate. If someone comes along with a small optimization it's hard enough to deploy that across the industry. If it's patented you've got another obstacle: nobody wants to pay the incremental cost for a small improvement (it's not even incremental cost once you've got free codecs, it's a complete hassle).
bsindicatr3•2d ago
Maybe you don't remember the way the GIF format (there was no JPEG, PNG, or WebP initially) had problems with licensing, and then, years later, the scares about it potentially becoming illegal to use GIFs. Here's a mention of some of the problems with Unisys, though I didn't find info about these scares on Wikipedia's GIF or CompuServe pages:
https://www.quora.com/Is-it-true-that-in-1994-the-company-wh...
Similarly, the awful history of digital content restriction technology in general (DRM, etc.). I'm not against companies trying to protect assets, but data assets have always been inherently prone to "use", whether that use was intended by the one that provided the data or not. The problem has always been the means of dissemination, not that the data itself needed to be encoded with a lock that anyone with the key (or the means to get/make one) could unlock, nor that it should need to call home, basically preventing the user from legitimately being able to use the data.
adzm•2d ago
The GIF page on wikipedia has an entire section for the patent troubles https://en.wikipedia.org/wiki/GIF#Unisys_and_LZW_patent_enfo...
tomrod•2d ago
(I know nothing about the legal side of all this, just remembering the time period of Ubuntu circa 2005-2008).
notpushkin•2d ago
Source? I’ve seen Vorbis used in a whole bunch of places.
Notably, Spotify only used Vorbis for a while (still does, but also includes AAC now, for Apple platforms I think).
nullc•2d ago
Personal music players had the issue that they had to have MP3 due to market forces, so offering Vorbis didn't save them any money. There were also some design decisions in Vorbis that made it a little more annoying to support on some very limited hardware than MP3.
breve•2d ago
It's hard to get more mainstream than YouTube and Netflix.
lightedman•2d ago
No, just no. We've had free community codec packs for years before Google even existed. Anyone remember CCCP?
tristor•2d ago
We started CCCP because at the time, anime fansubs were predominantly traded on P2P filesharing services like Kazaa, Gnutella, eDonkey, Direct Connect, and later Bittorrent. The most popular codec pack at the time was K-Lite / Kazaa Codec Pack which was a complete and utter mess, and specifically for fansubbing, it was hard to get subtitles to work properly unless they were hard embedded. Soft-subbing allowed for improvements, and there were a lot of improvements to subtitling in the fansubbing community over the years, one of the biggest came when the Matroska (MKV) container format came about, that allowed arbitrarily different formats/encodings to share a single media container, and the community shifted almost entirely to ASS formatted subtitles, but because an MKV could contain many different encodings, any given MKV file may play correctly or not on any given system. CCCP was intended to provide an authoritative, canonical, single-source way to play fansubbed anime correctly on Windows, and we achieved that objective.
But let's be clear: nobody involved was under any illusions that the MPEG-LA or any other license holders of, for instance, H.264 were fans of our community or what we were doing. Anime fansubbing came out of piracy of foreign-language media into the English market via the Internet and P2P filesharing. None of us gave a shit, and the use of Soviet imagery in CCCP was exactly a nod to the somewhat communist ideal that knowledge and access to media should be free, and that patent-encumbering codecs and patenting software isn't just stupid, it's morally wrong. I still strongly feel software patents are evil.
Nonetheless, at no point in its life was CCCP fully legal/licensed appropriately for usage, and effectively nobody cared, not even the licensing authorities, because the existence of these things made their licenses for encoders more valuable for companies producing media, as it was easier for actual people to consume.
cxr•2d ago
The release of VP3 as open source predates Google's later acquisition of On2 (2010) by nearly a decade.
zoeysmithe•2d ago
If that stuff worked better, Linux would have failed entirely; instead, nearly everyone interfaces with a Linux machine probably hundreds if not thousands of times a day in some form. Maybe millions, if we consider how complex just accessing internet services is and the many servers, routers, mirrors, proxies, etc. one encounters in just a trivial app refresh. If not Linux, then the open Mach/BSD derivatives iOS uses.
Then, looking even earlier than the ascent of Linux, we had all manner of free/open stuff informally in the 70s and 80s: shareware, open culture, etc., which led to today, where this entire medium only exists because of open standards, open source, and volunteering.
Software patents are net loss for society. For profit systems are less efficient than open non-profit systems. No 'middle-man' system is better than a system that goes out of its way to eliminate the middle-man rent-seeker.
derf_•2d ago
I disagree. Video is such a large percentage of internet traffic and licensing fees are so high that it becomes possible for any number of companies to subsidize the development cost of a new codec on their own and still net a profit. Google certainly spends the most money, but they were hardly the only ones involved in AV1. At Mozilla we developed Daala from scratch and had reached performance competitive with H.265 when we stopped to contribute the technology to the AV1 process, and our team's entire budget was a fraction of what the annual licensing fees for H.264 would have been. Cisco developed Thor on their own with just a handful of people and contributed that, as well. Many other companies contributed technology on a royalty-free basis. Outside of AV1, you regularly see things like Samsung's EVC (or LC-EVC, or APV, or...), or the AVS series from the Chinese.... If the patent situation were more tenable, you would see a lot more of these.
The cost of developing the technology is not the limitation. I would argue the cost to get all parties to agree on a common standard and the cost to deploy it widely enough for people to rely on it is much higher, but people manage that on a royalty-free basis for many other standards.
mike_hearn•2d ago
H.264 was something like >90% of all video a few years ago and wasn't it free for streaming if the end user wasn't paying? IIRC someone also paid the fees for an open source version. There were pretty good licensing terms available and all the big players have used it extensively.
Anyway, my point was only that expecting Google to develop every piece of tech in the world and give it all away for free isn't a general model for tech development, whereas IP rights and patent pools are. The free ride ends the moment Google decide they need more profit, feel threatened in some way or get broken up by the government.
ZeroGravitas•2d ago
Not that the licensing of H.264 wasn't a mess too. You suggest it was free for web use, but they originally only promised not to charge for free streaming up until 2015, reserving the right to do so once it was embedded in the web. Pressure from Google/Xiph/etc.'s WebM project forced them to promise not to enforce it after that point either.
https://www.wired.com/2010/08/mpeg-la-extends-web-video-lice...
Cisco paid for a binary version of a decoder that could be downloaded by Firefox as a plugin. They could only do so because of a loophole around a cap in fees they were already hitting, so it wouldn't cost them more to supply it to every Firefox user.
thinkingQueen•2d ago
Daala was never meant to be widely adopted in its original form — its complexity alone made that unlikely. There’s a reason why all widely deployed codecs end up using similar coding tools and partitioning schemes: they’re proven, practical, and compatible with real-world hardware.
As for H.265, it’s the result of countless engineering trade-offs. I’m sure if you cherry-picked all the most experimental ideas proposed during its development, you could create a codec that far outperforms H.265 on paper. But that kind of design would never be viable in a real-world product — it wouldn’t meet the constraints of hardware, licensing, or industry adoption.
Now the following is a more general comment, not directed at you.
There's often a dismissive attitude toward the work done in the H.26x space. You can sometimes see this even in technical meetings, when someone proposes a novel but impractical idea and gets frustrated when others don't immediately embrace it. But there's a good reason for the conservative approach: codecs aren't just judged by their theoretical performance; they have to be implementable, efficient, and compatible with real-world constraints. They also have to somehow make financial sense and cannot be given away without some form of compensation.
weinzierl•2d ago
I don't know about video codecs, but MP3 (also part of MPEG) came out of Fraunhofer and was paid for by German tax money. It should not have been patented in the first place (and wasn't in Germany).
thinkingQueen•2d ago
Not to mention the computer clusters to run all the coding sims, thousands and thousands of CPUs are needed per research team.
People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.
MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.
MPEG was reorganized because this Leonardo guy became an obstacle, and he's been angry about it ever since. Other than that, I'd say business as usual in the video coding realm.
roenxi•2d ago
We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.
thinkingQueen•2d ago
The top AI companies use very restrictive licenses.
I think it’s actually the other way around and AI industry will actually end up following the video coding industry when it comes to patents, royalties, licenses etc.
roenxi•2d ago
If it is a matter of laws, China would just declare the law doesn't count to dodge around the US chip sanctions. Which, admittedly, might happen - but I don't see how that could result in much more freedom than we already have now. Having more Chinese people involved is generally good for prices, but that doesn't have much to do with market structure as much as they work hard and do things at scale.
> The top AI companies use very restrictive licenses.
These models are supported by the Apache 2.0 license ~ https://openai.com/open-models/
Are they lying to me? It is hard to get much more permissive than Apache 2.
mike_hearn•2d ago
NVIDIA's advantage over AMD is largely in the drivers and CUDA i.e. their software. If it weren't for IP law or if NVIDIA had foolishly made their software fully open source, AMD could have just forked their PTX compiler and NVIDIAs advantage would never have been established. In turn that'd have meant they wouldn't have any special privileges at TSMC.
rwmj•2d ago
(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)
thinkingQueen•2d ago
People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
eqvinox•2d ago
You wouldn't know if it had already happened, since such a codec would have little chance of success, possibly not even publication. Your proposition is really unprovable in either direction due to the circular feedback on itself.
bayindirh•2d ago
Hmm, let me check my notes:
Some of these guys have standards bodies as supporters, but in all cases, bigger groups formed behind them after they made considerable effort. QOI and QOA were written by a single guy just because he was bored. For example, FLAC is a worst-of-all-worlds codec for industry to back: a streamable, seekable, hardware-implementable, error-resistant, lossless codec with 8 channels, 32-bit samples, and sample rates up to 640 kHz, with no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.
On the other hand, we have LAME, an encoder which also uses psychoacoustic techniques to improve the resulting sound quality, and almost everyone uses it, because the closed-source encoders generally sound lamer than LAME at the same bit rates. Remember, the MP3 format doesn't have a reference encoder. If the decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.
> Are you really saying that patents are preventing people from writing the next great video codec?
Yes, yes, and yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight these codecs, because they are terrified of being deprived of their monies.
If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear that can take videos comes only with "personal, non-profit and non-professional" licenses on board, and why you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?
For the license disclaimers in camera manuals, see [0].
[0]: https://news.ycombinator.com/item?id=42736254
Taek•2d ago
You don't *have* to add all the rigour. If you develop a new technique for video compression, a new container for holding data, etc, you can just try it out and share it with the technical community.
Well, you could, if you weren't afraid of getting sued for infringing on patents.
unlord•2d ago
As someone who led an open source team (of mostly volunteers) for nearly a decade at Mozilla, I can tell you that people do work on video codecs for fun; see https://github.com/xiph/daala
Working with fine people from Xiph.Org and the IETF (and later AOM) on the royalty-free formats Theora, Opus, Daala and AV1 was by far the most fun, interesting and fulfilling work I've had as a professional engineer.
tux3•2d ago
Actually, are Xiph people still involved in AVM? It seems like it's being developed a little bit differently than AV1. I might have lost track a bit.
scott_w•2d ago
Yes, that’s exactly what people are saying.
People are also saying that companies aren’t writing video codecs.
In both cases, they can be sued for patent infringement if they do.
Spooky23•2d ago
Look at data compression. Sperry/Univac controlled key patents and slowed down invention in the space for years. Was it in the interest of these companies or Unisys (their successor) to invest in compression development? Nope.
That’s by design. That moat of exclusivity makes it difficult to compensate people to come up with novel inventions in-scope or even adjacent to the patent. With codecs, the patents are very granular and make it difficult for anyone but the largest players with key financial interests to do much of anything.
raverbashing•2d ago
The question is more, "who would write the HTTP spec?" except instead of sending text back and forth you need experts in compression, visual perception, video formats, etc
mike_hearn•2d ago
Our industry has come to take Google's enormous corporate generosity for granted, but there was zero need for it to be as helpful to open computing as it has been. It would have been just as successful with YouTube if Chrome was entirely closed source and they paid for video codec licensing, or if they developed entirely closed codecs just for their own use. In fact nearly all Google's codebase is closed source and it hasn't held them back at all.
Google did give a lot away though, and for that we should be very grateful. They not only released a ton of useful code and algorithms for free, they also inspired a culture where other companies also do that sometimes (e.g. Llama). But we should also recognize that relying on the benevolence of 2-3 idealistic billionaires with a browser fetish is a very time and place specific one-off, it's not a thing that can be demanded or generalized.
In general, R&D is costly and requires incentives. Patent pools aren't perfect, but they do work well enough to always be defining the state-of-the-art and establish global standards too (digital TV, DVDs, streaming.... all patent pool based mechanisms).
breve•2d ago
It's not a social mechanism. And it's not generosity.
Google pushes huge amounts of video and audio through YouTube. It's in Google's direct financial interest to have better video and audio codecs implemented and deployed in as many browsers and devices as possible. It reduces Google's costs.
Royalty-free video and audio codecs makes that implementation and deployment more likely in more places.
> Patent pools aren't perfect
They are a long way from perfect. Patent pools will contact you and say, "That's a nice codec you've got there. It'd be a shame if something happened to it."
Three different patent pools are trying to collect licencing fees for AV1:
https://www.sisvel.com/licensing-programmes/audio-and-video-...
https://accessadvance.com/licensing-programs/vdp-pool/
https://www.avanci.com/video/
chubot•2d ago
You might be missing that almost all of Linux development is funded by the same kind of companies that fund MPEG development.
It's not "engineers in their basement", and never was
https://www.linuxfoundation.org/about/members
e.g. Red Hat, Intel, Oracle, Google, and now MICROSOFT itself (the competitive landscape changed)
This has LONG been the case, e.g. an article from 2008:
https://www.informationweek.com/it-sectors/linux-contributor...
2017 Linux Foundation Report: https://www.linuxfoundation.org/press/press-release/linux-fo...
Roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since the adoption of Git made detailed tracking possible
The Top 10 organizations sponsoring Linux kernel development since the last report include Intel, Red Hat, Linaro, IBM, Samsung, SUSE, Google, AMD, Renesas and Mellanox
---
curl does seem to be an outlier, but you still need to answer the question: "Who would develop video codecs?" You can't just say "Linux appeared out of thin air", because that's not what happened.
Linux has funding because it serves the interests of a large group of companies that themselves have a source of revenue.
(And to be clear, I do not think that is a bad thing! I prefer it when companies write open source software. But it does skew the design of what open source software is available.)
cwizou•2d ago
It kinda did though https://en.wikipedia.org/wiki/Linux#Creation !
The corporate support you mentioned arrived years after that.
chubot•2d ago
But creation only counts for so much -- without support, Linux could still be a hobby project that "won't be big and professional like GNU"
I'm saying Linux didn't APPEAR out of thin air, or at least it's worth looking deeper into the reasons why. "Appearing" to the general public, i.e. making widely useful software, requires a large group of people over a sustained time period, like 10 years.
----
i.e. Right NOW there are probably hundreds of projects like Linux that you haven't heard of, which don't necessarily align with funders
I would actually make the comparison to GNU -- GNU is a successful project, but there are various efforts underneath it that kind of languish.
Look at High Priority Free Software Projects - https://www.fsf.org/campaigns/priority-projects/
- Decentralization, federation, and self-hosting
- Free drivers, firmware, and hardware designs
- Real-time voice and video chat
- Internationalization of free software
- Security by and for free software
- Intelligent personal assistant
I'm saying that VIDEO CODECS might be structurally more similar to these projects, than they are to the Linux kernel.
i.e. making a freely-licensed kernel IS aligned with Red Hat, Intel, Google, but making an Intelligent Personal Assistant is probably not.
Somebody probably ALREADY created a good free intelligent personal assistant (or one that COULD BE as great as Linux), but you never heard of them. Because they don't have hundreds of companies and thousands of people aligned with them.
cwizou•8h ago
It took a while (and a lot of pain) to get a lot of driver vendors to come fully into the project, yet Linux was already gaining a bunch of traction at that time (say, the last half of the 90s).
I'll give you that Intel was always more or less a good actor though! But Google didn't exist when Linux already mattered. And when Google was created, they definitely benefited a lot from it, basing much of their infra on it.
Marketing needs (and lawyer approval) can bring support faster than most things. Opus for audio is a good example of that too.
mschuster91•2d ago
How about governments? Radar, lasers, microwaves: all offshoots of US military R&D.
There's nothing stopping either the US or European governments from stepping up and funding academic progress again.
rs186•2d ago
If we did that we would probably be stuck with low-bitrate 720p videos on YouTube.
mschuster91•2d ago
Give universities the money, let them care about the details.
rs186•2d ago
University research labs, usually with a team of no more than 10 people (at most 20), are good at producing early, proof-of-concept work, but not incredibly complex projects like creating an actual codec. They are not known for producing polished, mature commercial products that can be immediately used in the real world. They don't have the resources or the incentive to do so.
mschuster91•1d ago
Of course they have. Guess how MP3 was developed: as an offshoot of Germany's Fraunhofer Institute and FAU Erlangen-Nürnberg, amongst others.
The fact that no one seems to even be able to imagine how funding anything from the government could even work (despite that era being just a few decades ago) is shocking.
[1] https://de.wikipedia.org/wiki/MP3
Reason077•2d ago
Has AV1 solved this, to some extent? Although there are patent claims against it (patents for technologies that are fundamental to all the modern video codecs), it still seems better than the patent & licensing situation for h264 / h265.
afroboy•2d ago
Just check pirated releases of TV shows and movies.
riedel•2d ago
[0] https://mpeg.chiariglione.org/standards/mpeg-7/reference-sof...
EDIT: Here is the Wikipedia page of BiM which evidently made it even into an ISO Standard [1]
[1] https://en.m.wikipedia.org/wiki/BiM
fweimer•2d ago
THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER TO (I) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (II) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM
It's unclear whether this license covers videoconferencing for work purposes (where you are paid, but not specifically to be on that call). It seems to rule out remote tutoring.
MPEG LA probably did not have much choice here because this language requirement (or language close to it) for outgoing patent licenses is likely part of their incoming patent license agreements. It's probably impossible at this point to renegotiate and align the terms with how people actually use video codecs commercially today.
But it means that you can't get a pool license from MPEG LA that covers commercial videoconferencing, you'd have to negotiate separately with the individual patent holders.
wmf•2d ago