
Infinite Pixels

https://meyerweb.com/eric/thoughts/2025/08/07/infinite-pixels/
59•OuterVale•48m ago•1 comments

Baltimore Assessments Accidentally Subsidize Blight–and How We Can Fix It

https://progressandpoverty.substack.com/p/how-baltimore-assessments-accidentally
24•surprisetalk•1h ago•1 comments

Arm Desktop: x86 Emulation

https://marcin.juszkiewicz.com.pl/2025/07/22/arm-desktop-emulation/
14•PaulHoule•1h ago•0 comments

New AI Coding Teammate: Gemini CLI GitHub Actions

https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
100•michael-sumner•4h ago•47 comments

Outdated Software, Nationwide Chaos: United Grounds Flights After Meltdown

https://allchronology.com/2025/08/07/outdated-software-nationwide-chaos-united-airlines-grounds-flights-after-system-meltdown/
5•rectang•5m ago•1 comments

We replaced passwords with something worse

https://blog.danielh.cc/blog/passwords
532•max__dev•11h ago•419 comments

How AI Conquered the US Economy: A Visual FAQ

https://www.derekthompson.org/p/how-ai-conquered-the-us-economy-a
48•rbanffy•3h ago•35 comments

Show HN: Stasher – Burn-after-read secrets from the CLI, no server, no trust

https://github.com/stasher-dev/stasher-cli
31•stasher-dev•2h ago•22 comments

Leonardo Chiariglione: “I closed MPEG on 2 June 2020”

https://leonardo.chiariglione.org/
154•eggspurt•3h ago•112 comments

GoGoGrandparent (YC S16) Is Hiring Back End and Full-Stack Engineers

1•davidchl•2h ago

An LLM does not need to understand MCP

https://hackteam.io/blog/your-llm-does-not-care-about-mcp/
35•gethackteam•1h ago•29 comments

Claude Code IDE integration for Emacs

https://github.com/manzaltu/claude-code-ide.el
699•kgwgk•1d ago•235 comments

Cracking the Vault: How we found zero-day flaws in HashiCorp Vault

https://cyata.ai/blog/cracking-the-vault-how-we-found-zero-day-flaws-in-authentication-identity-and-authorization-in-hashicorp-vault/
154•nihsy•6h ago•58 comments

The Whispering Earring (Scott Alexander)

https://croissanthology.com/earring
38•ZeljkoS•3h ago•5 comments

PastVu: Historical Photographs on Current Maps

https://pastvu.com/?_nojs=1
17•lapetitejort•2d ago•1 comments

Show HN: Aura – Like robots.txt, but for AI actions

https://github.com/osmandkitay/aura
16•OsmanDKitay•1d ago•14 comments

AI Ethics is being narrowed on purpose, like privacy was

https://nimishg.substack.com/p/ai-ethics-is-being-narrowed-on-purpose
93•i_dont_know_•2h ago•57 comments

Running GPT-OSS-120B at 500 tokens per second on Nvidia GPUs

https://www.baseten.co/blog/sota-performance-for-gpt-oss-120b-on-nvidia-gpus/
204•philipkiely•11h ago•129 comments

Synthetic Biology for Space Exploration

https://www.nature.com/articles/s41526-025-00488-7
6•PaulHoule•2d ago•0 comments

Splatshop: Efficiently Editing Large Gaussian Splat Models

https://momentsingraphics.de/HPG2025.html
17•ibobev•3d ago•0 comments

Project Hyperion: Interstellar ship design competition

https://www.projecthyperion.org
316•codeulike•17h ago•242 comments

Debounce

https://developer.mozilla.org/en-US/docs/Glossary/Debounce
93•aanthonymax•2d ago•47 comments

Children's movie leads art historian to long-lost Hungarian masterpiece (2014)

https://www.theguardian.com/world/2014/nov/27/stuart-little-art-historian-long-lost-hungarian-masterpiece
30•how-about-this•3d ago•4 comments

Fastmail breaks UI in production

https://twitter.com/licyeus/status/1953438985381974493
22•blux•51m ago•15 comments

Did Craigslist decimate newspapers? Legend meets reality

https://www.poynter.org/business-work/2025/did-craigslist-kill-newspapers-poynter-50/
36•zdw•3d ago•32 comments

Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model

https://github.com/KittenML/KittenTTS
882•divamgupta•1d ago•340 comments

Maybe we should do an updated Super Cars

https://spillhistorie.no/2025/07/31/maybe-we-should-do-an-updated-version/
6•Kolorabi•1h ago•1 comments

Rules by which a great empire may be reduced to a small one (1773)

https://founders.archives.gov/documents/Franklin/01-20-02-0213
210•freediver•14h ago•133 comments

A candidate giant planet imaged in the habitable zone of α Cen A

https://arxiv.org/abs/2508.03814
102•pinewurst•12h ago•34 comments

Litestar is worth a look

https://www.b-list.org/weblog/2025/aug/06/litestar/
310•todsacerdoti•18h ago•79 comments

Leonardo Chiariglione: “I closed MPEG on 2 June 2020”

https://leonardo.chiariglione.org/
154•eggspurt•3h ago

Comments

wheybags•3h ago
As someone who hasn't had any exposure to the human stories behind mpeg before, it feels to me like it's been a force for evil since long before 2020. Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken.
jbverschoor•3h ago
Enough codecs out there. Just no adoption.
wheybags•3h ago
Yes, because mpeg got there first, and now their dominance is baked into silicon with hardware acceleration. It's starting to change at last but we have a long way to go. That way would be a lot easier if their patent portfolio just died.
egeozcan•3h ago
This might be an oversimplification, but as a consumer, I think I see a catch-22 for new codecs. Companies need a big incentive to invest in them, which means the codec has to be technically superior and safe from hidden patent claims. But the only way to know if it's safe is for it to be widely used for a long time. Of course, it can't get widely used without company support in the first place. So, while everyone waits, the technology is no longer superior, and the whole thing fizzles out.
Taek•2h ago
Companies only need a big incentive to invest in new codecs because creating a codec that has a simple incremental improvement would violate existing patents.
jbverschoor•2h ago
JXL has been around for years.

AV1 for seven.

The problem is that every platform wants to force its own codec and earn royalties from the rest of the world.

They're literally sabotaging it. JXL support even got removed from Chrome.

Investment in software adoption is next to zero.

In hardware it's a different story, and I'm not sure to what extent each codec can be properly accelerated.

TiredOfLife•1h ago
Because every codec has 3+ different patent pools wanting rent. Each with different terms.
rs186•1h ago
Not all codecs are equal, and to be honest, most are probably not optimized/suitable for today's applications, otherwise Google wouldn't have invented their own codec (which then gets adopted widely, fortunately).
mike_hearn•3h ago
Possibly, nothing. Codec development is slow and expensive. Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born, and it's hardly a robust strategy. Plus free codecs were often built by acquiring companies that had previously been using IP licensing as a business model rather than from-scratch development.
wheybags•3h ago
It's not just about new codecs. There's also people making products that would use codecs just deciding not to because of the patent hassle.
newsclues•3h ago
This is the sort of project that should be developed and released via open source from academia.

Audio and video codecs and document formats like PDF are all foundational to computing and modern life, from government to business, so there is a great incentive to make it all open and free.

oblio•2h ago
You're also describing technologies with universal use and potential for long term rent seeking.

Basically MBA drool material.

newsclues•27m ago
Yeah, and if MBAs want to reap that reward, they need to fund the development exclusively without government funding.
mike_hearn•2h ago
Universities love patent licensing. I don't think academia is the solution you're looking for.
yxhuvud•2h ago
The solution to that is to remove the ability to patent codecs.
master-lincoln•26m ago
I think we should go a step further and remove the ability to patent algorithms (software)
newsclues•31m ago
So do companies.

But education receives a lot of funding from the government.

I think academia should build open source technology (that people can commercialize on their own with the expertise).

Higher education doesn’t need to have massive endowments of real estate and patent portfolio to further educ… administration salaries and vanity building projects.

Academia can serve the world with technology and educated minds.

thinkingQueen•3h ago
Not sure why you are downvoted as you seem to be one of the few who knows even a little about codec development.

And regarding ”royalty-free” codecs please read this https://ipeurope.org/blog/royalty-free-standards-are-not-fre...

bjoli•2h ago
At least two of the members of ipeurope are companies you could use as an argument for why we shouldn't have patents at all.
blendergeek•2h ago
> And regarding ”royalty-free” codecs please read this https://ipeurope.org/blog/royalty-free-standards-are-not-fre...

Unsurprisingly companies that are losing money because their rent-seeking on media codecs is now over will spread FUD [0] about royalty free codecs.

[0] https://en.wikipedia.org/wiki/Fear%2C_uncertainty_and_doubt

chrismorgan•2h ago
That article is a scare piece designed to spread fear, uncertainty and doubt, to prop up an industry that has already collapsed because everyone else hated them, and make out that they’re the good guys and you should go back to how things were.
cnst•42m ago
> The catch is that while the AV1 developers offer their patents (assuming they have any) on a royalty-free basis, in return they require users of AV1 to agree to license their own patents royalty-free back to them.

Such a huge catch that the companies that offer you a royalty-free license, only do so on the condition that you're not gonna turn around and abuse your own patents against them!

How exactly is that a bad thing?

How is it different from the (unwritten) social contracts of all humans and even of animals? How is it different from the primal instincts?

pornel•2h ago
IP law, especially defence against submarine patents, makes codec development expensive.

In the early days of MPEG, codec development was difficult, because most computers weren't capable of encoding video and the field was in its infancy.

However, by the end of the '00s computers were fast enough for anybody to do video encoding R&D, and there was a ton of research to build upon. At that point MPEG's role changed from being a pioneer in the field to being an incumbent with a patent minefield, stopping others from moving the field forward.

mike_hearn•2h ago
IP law and the need for extremely smart people with a rare set of narrow skills. It's not like codec development magically happens for free if you ignore patents.

The point is, if there had been no incentives to develop codecs, there would have been no MPEG. Other people would have stepped into the void and sometimes did, e.g. RealVideo, but without legal IP protection the codecs would just have been entirely undocumented and heavily obfuscated, and would have moved to tamper-proofed ASICs much faster.

badsectoracula•1h ago
That sounds like the 90s argument against FLOSS: without the incentive for people to sell software, nobody would write it.
strogonoff•1h ago
Without IP protections that allow copyleft to exist arguably there would be no FOSS. When anything you publish can be leveraged and expropriated by Microsoft et al. without them being obligated to contribute back or even credit you, you are just an unpaid ghost engineer for big tech.
tsimionescu•1h ago
This is still the argument for software copyright. And I think it's still a pretty persuasive argument, despite the success of FLOSS. To this day, there is very little successful FLOSS consumer software. Outside of browsers, Ubuntu, LibreOffice, and GIMP are more or less it, at least outside certain niches. And even they are pretty tiny compared to Windows/macOS/iOS/Android, Office/Google Docs, or Photoshop.

The browsers are an interesting case. Neither Chrome nor Edge are really open source, despite Chromium being so, and they are both funded by advertising and marketing money from huge corporations. Safari is of course closed source. And Firefox is an increasingly tiny runner-up. So I don't know if I'd really count Chromium as a FLOSS success story.

Overall, I don't think FLOSS has had the kind of effect that many activists were going for. What has generally happened is that companies building software have realized that there is a lot of value to be found in treating FLOSS software as a kind of barter agreement between companies, where maybe Microsoft helps improve Linux for the benefit of all, but in turn it gets to use, say, Google's efforts on Chromium, and so on. The fact that other companies then get to mooch off of these big collaborations doesn't really matter compared to getting rid of the hassle of actually setting up explicit agreements with so many others.

_alternator_•19m ago
The value of OSS is estimated at about $9 trillion. That's more than the market value of any company on earth.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148

sitkack•12m ago
> don't think FLOSS has had the kind of effect that many activists were going for

The entire internet, end to end, runs on FLOSS.

zozbot234•1h ago
Software wasn't always covered by copyright, and people wrote it all the same. In fact they even sold it, just built-to-order as opposed to any kind of retail mass market. (Technically, there was no mass market for computers back then so that goes without saying.)
sitkack•13m ago
You continue to make the same unsubstantiated claims about codecs being hard and expensive. These same tropes were said about every other field, and even if true, we have tens of thousands of folks that would like to participate, but are locked out due to broken IP law.

The firewall of patents exists precisely because digital video is a chokepoint: a way to shake down the route media has to travel to reach the end user.

Codecs are not "harder than" compilers, yet the field of compilers was blown completely open by GCC. Capital didn't see the market opportunity because there wasn't the same possibility of being a gatekeeper for so much attention and money.

The patents aren't because it is difficult, the patents are there because they can extract money from the revenue streams.

cornholio•2h ago
That's unnecessarily harsh. Patent pools exist to promote collaboration in a world with aggressive IP legislation, they are an answer to a specific environment and they incentivize participants to share their IP at a reasonable price to third parties. The incentive being that you will be left out of the pool, the other members will work around your patents while not licensing their own patents to you, so your own IP is now worthless since you can't work around theirs.

As long as IP law continues in the same form, the alternative to that is completely closed agreements among major companies that will push their own proprietary formats and aggressively enforce their patents.

The fair world where everyone is free to create a new thing, improve upon the frontier codecs, and get a fair reward for their efforts, is simply a fantasy without patent law reform. In the current geopolitical climate, it's very very unlikely for nations where these developments traditionally happened, such as US and western Europe, to weaken their IP laws.

ZeroGravitas•1h ago
They actually messed up the basic concept of a patent pool, and that is the key to their failure.

They didn't get people to agree on terms up front; they made the final codec with interlocking patents embedded from hundreds of parties, made no attempt to avoid random outsiders' patents, and then, once it was done, tried to come to a licence agreement at a point when every minor patent holder had an effective veto over the resulting pool. That's how you end up with multiple pools plus people who own patents and aren't members of any of the pools. It's ridiculous.

My minor conspiracy theory is that if you did it right, then you'd basically end up with something close to open source codecs as that's the best overall outcome.

Everyone benefits from only putting in freely available ideas. So if you want to gouge people with your patents you need to mess this up and "accidentally" create a patent mess.

scotty79•1h ago
Patent pools exist to make an infeasible system look not so infeasible, so people won't recognize how it's stifling innovation and abolish it.
phkahler•27m ago
>> That's unnecessarily harsh. Patent pools exist to promote collaboration in a world with aggressive IP legislation, they are an answer to a specific environment and they incentivize participants to share their IP at a reasonable price to third parties.

You can say that, but this discussion is in response to the guy who started MPEG and later shut it down. I don't think he'd say it's harsh.

Taek•2h ago
I avoided a career in codecs after spending about a year in college learning about them. The patent minefield meant I couldn't meaningfully build incremental improvements on what existed, and the idea of diligently dancing around existing patents and then releasing something that intentionally lacked state-of-the-art ideas wasn't compelling.

Codec development is slow and expensive because you can't just release a new codec; you have to dance around patents.

mike_hearn•2h ago
Well, a career in codec development means you'd have done it as a job, and so you'd have been angling for a job at the kind of places that enter into the patent pools and contribute to the standards.
astrange•1h ago
Software patents aren't an issue in much of the world; the reason I thought there wasn't much of a career in codec development was that it was obvious that it needed to move down into custom ASICs to be power-efficient, at which point you can no longer develop new ones until people replace all their hardware.
rowanG077•54m ago
Software patents aren't an issue in most of the world. Codecs however are used all over the world. No one is going to use a codec that is illegal to use in the US and EU.
deadbabe•1h ago
Why not just use AI?
ghm2199•1h ago
For the uninitiated, could you describe why codec development is slow and expensive?
thinkingQueen•1h ago
It’s a bit like developing an F1 car. Or a cutting edge airplane. Lots of small optimizations that have to work together. Sometimes big new ideas emerge but those are rare.

Until the new codec comes together, all those small optimizations aren't really worth much, so it's a long-term research project with potentially zero return on investment.

And yes, most of the small optimizations are patented, something that I've come to understand isn't viewed very favorably by most.

phkahler•13m ago
>> And yes, most of the small optimizations are patented, something that I’ve come to understand isn’t viewed very favorably by most.

Codecs are like infrastructure not products. From cameras to servers to iPhones, they all have to use the same codecs to interoperate. If someone comes along with a small optimization it's hard enough to deploy that across the industry. If it's patented you've got another obstacle: nobody wants to pay the incremental cost for a small improvement (it's not even incremental cost once you've got free codecs, it's a complete hassle).

bsindicatr3•1h ago
> Free codecs only came along … and it's hardly a robust strategy

Maybe you don't remember the way the GIF format (there was no JPEG, PNG, or WebP initially) had problems with licensing, and then, years later, the scares about it potentially becoming illegal to use GIFs. Here's a mention of some of the problems with Unisys, though I didn't find info about these scares on Wikipedia's GIF or CompuServe pages:

https://www.quora.com/Is-it-true-that-in-1994-the-company-wh...

Similarly, there's the awful history of digital content restriction technology in general (DRM, etc.). I'm not against companies trying to protect assets, but data assets have always been inherently prone to "use", whether or not that use was intended by the one who provided the data. The problem has always been the means of dissemination, not that the data itself needed to be encoded with a lock that anyone with the key (or the means to get or make one) could open, nor that it should need to call home, effectively preventing the user from legitimately using the data.

adzm•1h ago
> I didn’t find info about these scares on Wikipedia’s GIF or Compuserve pages

The GIF page on wikipedia has an entire section for the patent troubles https://en.wikipedia.org/wiki/GIF#Unisys_and_LZW_patent_enfo...

tomrod•1h ago
Free codecs have been available a long time, surely, as we could install them in Linux distributions in 2005 or earlier?

(I know nothing about the legal side of all this, just remembering the time period of Ubuntu circa 2005-2008).

zappb•1h ago
Free codecs without patent issues were limited to things like Vorbis which never got wide support. There were FOSS codecs for patented algorithms, but those had legal issues in places that enforce software patents.
notpushkin•1h ago
> which never got wide support

Source? I’ve seen Vorbis used in a whole bunch of places.

Notably, Spotify used only Vorbis for a while (it still does, but now also includes AAC, for Apple platforms I think).

scott_w•5m ago
Pre-Spotify, MP3 players would usually only ship with MP3 support (thus the name), so people would only rip to MP3. Ask any millennial and most of them will never have heard of Ogg.
breve•6m ago
AV1, VP9, and Opus are used on YouTube and Netflix right now.

It's hard to get more mainstream than YouTube and Netflix.

lightedman•1h ago
"Free codecs only came along at all because Google decided to subsidize development"

No, just no. We've had free community codec packs for years before Google even existed. Anyone remember CCCP?

notpushkin•47m ago
Yes. Those won’t help you if you use them for commercial use and patent holders find out about it.
leguminous•43m ago
CCCP was just a collection of existing codecs, they didn't develop their own. Most of the codecs in CCCP were patented. Using it without licenses was technically patent infringement in most places. It's just that nobody ever cared to enforce it on individual end users.
cxr•25m ago
> Free codecs only came along at all because Google decided to subsidize development but that became possible only 15 years or so after MPEG was born

The release of VP3 as open source predates Google's later acquisition of On2 (2010) by nearly a decade.

zoeysmithe•13m ago
This is impossible to know. Not that long ago something like Linux would have sounded like a madman's dream to someone with your perspective. It turns out great innovations happen outside the capitalist for-profit context and denying that is very questionable. If anything, those kinds of setups often hinder innovation. How much better would linux be if it was mired in endless licensing agreements, per monthly rates, had a board full of fortune 500 types, and billed each user a patent fee? Or any form of profit incentive 'business logic'?

If that stuff worked better, Linux would have failed entirely; instead, nearly everyone interfaces with a Linux machine hundreds if not thousands of times a day in some form. Maybe millions, if we consider how complex just accessing internet services is and the many servers, routers, mirrors, proxies, etc. one encounters in a trivial app refresh. If not Linux, then the open Mach/BSD derivatives iOS uses.

Then looking even previous to the ascent of linux, we had all manner of free/open stuff informally in the 70s and 80s. Shareware, open culture, etc that led to today where this entire medium only exists because of open standards and open source and volunteering.

Software patents are a net loss for society. For-profit systems are less efficient than open non-profit systems, and a system with no middle-man beats one that goes out of its way to entrench a rent-seeking middle-man.

thinkingQueen•3h ago
Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).

Not to mention the computer clusters to run all the coding sims, thousands and thousands of CPUs are needed per research team.

People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.

MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there with many new ones especially from Asia.

MPEG was reorganized because this Leonardo guy became an obstacle, and he's been angry about it ever since. Other than that, I'd say it's business as usual in the video coding realm.

roenxi•3h ago
> It’s like saying ”where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.

We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.

thinkingQueen•2h ago
That’s hardly true. Nvidia’s tech is covered by patents and licenses. Why else would it be worth 4.5 trillion dollars?

The top AI companies use very restrictive licenses.

I think it’s actually the other way around and AI industry will actually end up following the video coding industry when it comes to patents, royalties, licenses etc.

roenxi•2h ago
Because they make and sell a lot of hardware. I'm sure they do have a lot of patents and licences, but if all that disappeared today it'd be years to decades before anyone could compete with them. Even just getting a foot in the door in TSMC's queue of customers would be hard. Their valuation can likely be justified based on their manufacturing position alone. There is literally no-one else who can do what they do, law or otherwise.

If it is a matter of laws, China would just declare the law doesn't count to dodge around the US chip sanctions. Which, admittedly, might happen - but I don't see how that could result in much more freedom than we already have now. Having more Chinese people involved is generally good for prices, but that doesn't have much to do with market structure as much as they work hard and do things at scale.

> The top AI companies use very restrictive licenses.

These models are supported by the Apache 2.0 license ~ https://openai.com/open-models/

Are they lying to me? It is hard to get much more permissive than Apache 2.

mike_hearn•2h ago
The top AI companies don't release their best models under any license. They're not even distributed at all. If you did steal the weights out from underneath Anthropic they would take you to court and probably win. Putting software you develop exclusively behind a network interface is a form of ultra-restrictive DRM. Yes, some places are currently trying to buy mindshare by releasing free models and that's fantastic, thank you, but they can only do that because investors believe the ROI from proprietary firewalled models will more than fund it.

NVIDIA's advantage over AMD is largely in the drivers and CUDA i.e. their software. If it weren't for IP law or if NVIDIA had foolishly made their software fully open source, AMD could have just forked their PTX compiler and NVIDIAs advantage would never have been established. In turn that'd have meant they wouldn't have any special privileges at TSMC.

oblio•2h ago
I imagine a chunk of it is also covered by trade secrets and NDAs.
rwmj•3h ago
Who would write a web server? Who would write Curl? Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year? People don't understand that these companies have huge R&D budgets!

(The answer is that most of the work would be done by companies who have an interest in video distribution - eg. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)

thinkingQueen•2h ago
Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened. We’re not talking about a software project that you can just hack together, compile, and see if it works. We’re talking about rigorous performance and complexity evaluations, subjective testing, and massive coordination with hardware manufacturers—from chips to displays.

People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.
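The "rigorous performance and complexity evaluations" mentioned above start from objective quality metrics computed over large test sets. As a flavor of what those coding sims measure, here is a minimal PSNR computation in Python (PSNR is a standard objective metric; the pure-Python loop is a toy, real evaluations run vectorized over millions of frames):

```python
import math

def psnr(ref: bytes, test: bytes, max_val: int = 255) -> float:
    """Peak signal-to-noise ratio between a reference and a decoded frame.

    Higher is better; identical frames give infinity.
    """
    assert len(ref) == len(test), "frames must have the same size"
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_val ** 2 / mse)
```

Metrics like this (plus subjective viewing tests) get run for every candidate optimization, which is where the thousands of CPUs mentioned elsewhere in the thread go.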

eqvinox•2h ago
> Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened.

You wouldn't know if it had already happened, since such a codec would have little chance of success, possibly not even publication. Your proposition is really unprovable in either direction due to the circular feedback on itself.

bayindirh•2h ago
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.

Hmm, let me check my notes:

    - Quite OK Image format: https://qoiformat.org/
    - Quite OK Audio format: https://qoaformat.org/
    - LAME (ain't a MP3 Encoder): https://lame.sourceforge.io/
    - Xiph family of codecs: https://xiph.org/
Some of these guys have standards bodies as supporters, but in all cases bigger groups formed behind them after they had made considerable effort. QOI and QOA were written by a single guy just because he was bored.
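QOI makes the point that a useful codec can fit in one person's head: its core is little more than byte-level run, index, and delta opcodes. As a toy illustration of the simplest of those ideas (this is plain run-length encoding, not the actual QOI format):

```python
def rle_encode(data: bytes) -> bytes:
    """Encode data as (run_length, byte_value) pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(blob: bytes) -> bytes:
    """Invert rle_encode: expand each (run_length, byte_value) pair."""
    out = bytearray()
    for j in range(0, len(blob), 2):
        out += bytes([blob[j + 1]]) * blob[j]
    return bytes(out)
```

Real formats layer smarter prediction on top, but the barrier to experimenting is an afternoon of hacking, not a patent pool.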

For example, FLAC is a worst-of-all-worlds codec for the industry to back: a streamable, seekable, hardware-implementable, error-resistant, lossless codec with up to 8 channels, 32-bit samples, sample rates up to 640 kHz, and no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.

On the other hand, we have LAME, an encoder that also uses psychoacoustic techniques to improve the resulting sound quality, and almost everyone uses it, because the closed-source encoders generally sound lamer than LAME at the same bit rates. Remember, the MP3 format doesn't have a reference encoder. If the decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.

> Are you really saying that patents are preventing people from writing the next great video codec?

Yes, yes, and, yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight with these codecs, because they are terrified of being deprived of their monies.

If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear that can take videos comes only with "personal, non-profit and non-professional" licenses on board, and why you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?

For the license disclaimers in camera manuals, see [0].

[0]: https://news.ycombinator.com/item?id=42736254

Taek•2h ago
People don't develop video codecs for fun because there are patent minefields.

You don't *have* to add all the rigour. If you develop a new technique for video compression, a new container for holding data, etc, you can just try it out and share it with the technical community.

Well, you could, if you weren't afraid of getting sued for infringing on patents.

fires10•2h ago
I don't do video because I don't work with it, but I do image compression for fun and no profit. I use some video techniques due to the type of images I am compressing. I don't release because of the patent minefield; I do it because it's fun. The simulation runs and other tasks I often kick to the cloud for the larger compute needs.
unlord•2h ago
> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.

As someone who led an open source team (of mostly volunteers) for nearly a decade at Mozilla, I can tell you that people do work on video codecs for fun; see https://github.com/xiph/daala

Working with the fine people from Xiph.Org and the IETF (and later AOM) on the royalty-free formats Theora, Opus, Daala and AV1 was by far the most fun, interesting and fulfilling work I've had as a professional engineer.

tux3•1h ago
Daala had some really good ideas. I only understand the coding tools at the level of a curious codec enthusiast, far from an expert, but it was really fascinating to follow its progress.

Actually, are Xiph people still involved in AVM? It seems like it's being developed a little bit differently than AV1. I might have lost track a bit.

scott_w•2h ago
> Are you really saying that patents are preventing people from writing the next great video codec?

Yes, that’s exactly what people are saying.

People are also saying that companies aren’t writing video codecs.

In both cases, they can be sued for patent infringement if they do.

raverbashing•2h ago
These are bad comparisons

The question is more, "who would write the HTTP spec?" except instead of sending text back and forth you need experts in compression, visual perception, video formats, etc

rwmj•1h ago
Did TBL need to patent the HTTP spec?
mike_hearn•2h ago
Google funding free stuff is not a real social mechanism. It's not something you can point to and say that's how society should work in general.

Our industry has come to take Google's enormous corporate generosity for granted, but there was zero need for it to be as helpful to open computing as it has been. It would have been just as successful with YouTube if Chrome was entirely closed source and they paid for video codec licensing, or if they developed entirely closed codecs just for their own use. In fact nearly all Google's codebase is closed source and it hasn't held them back at all.

Google did give a lot away though, and for that we should be very grateful. They not only released a ton of useful code and algorithms for free, they also inspired a culture where other companies also do that sometimes (e.g. Llama). But we should also recognize that relying on the benevolence of 2-3 idealistic billionaires with a browser fetish is a very time and place specific one-off, it's not a thing that can be demanded or generalized.

In general, R&D is costly and requires incentives. Patent pools aren't perfect, but they do work well enough to always be defining the state-of-the-art and establish global standards too (digital TV, DVDs, streaming.... all patent pool based mechanisms).

breve•8m ago
> Google funding free stuff is not a real social mechanism.

It's not a social mechanism. And it's not generosity.

Google pushes huge amounts of video and audio through YouTube. It's in Google's direct financial interest to have better video and audio codecs implemented and deployed in as many browsers and devices as possible. It reduces Google's costs.

Royalty-free video and audio codecs make that implementation and deployment more likely in more places.

> Patent pools aren't perfect

They are a long way from perfect. Patent pools will contact you and say, "That's a nice codec you've got there. It'd be a shame if something happened to it."

Three different patent pools are trying to collect licensing fees for AV1:

https://www.sisvel.com/licensing-programmes/audio-and-video-...

https://accessadvance.com/licensing-programs/vdp-pool/

https://www.avanci.com/video/

mschuster91•2h ago
> Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them.

How about governments? Radar, lasers, microwaves: all offshoots of US military R&D.

There's nothing stopping either the US or European governments from stepping up and funding academic progress again.

rs186•1h ago
Yeah, counting on governments to develop codecs optimized for fast evolving applications for web and live streaming is a great idea.

If we did that we would probably be stuck with low-bitrate 720p videos on YouTube.

somethingsome•49m ago
Hey, I attend MPEG regularly (mostly lvc lately), there's a chance we’ve crossed paths!
fidotron•1h ago
The fact h264 and h265 are known by those terms is key to the other part of the equation: the ITU Video Coding Experts Group has become the dominant forum for setting standards going back to at least 2005.
Reason077•36m ago
> "Patents on h264, h265, and even mp3 have been holding the industry back for decades. Imagine what we might have if their iron grip on codecs was broken."

Has AV1 solved this, to some extent? Although there are patent claims against it (patents for technologies that are fundamental to all the modern video codecs), it still seems better than the patent & licensing situation for h264 / h265.

dostick•3h ago
The article does not give much beyond what you already read in the title. What obscure forces, and how? Isn't it an open-standards non-profit organisation? Then what could possibly hinder it? Maybe technologically closed standards became better and the non-profit project has no resources to compete with commercial ones? The USB Alliance has been able to work things out, so maybe compression standards should be developed in a similar way?
baobun•2h ago
Supposedly the whole story is told in their linked book.
eggspurt•2h ago
From Leonardo, who founded MPEG, on the page linked: "Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks."
karel-3d•2h ago
I... don't understand how AI relates to video codecs. Maybe because I don't understand either video codecs or AI on a deeper level.
bjoli•2h ago
It is like upscaling: if you could train an AI to "upscale" your audio or video, you could get away with sending a lot less data. It is already being done with quite amazing results for audio.
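
The idea can be sketched in a few lines; a plain linear interpolator stands in here for the learned upscaler, which would fill in the missing samples from trained priors instead of straight lines (function names and the toy signal are invented for illustration):

```python
# Toy sketch of the "send less, upscale on arrival" idea.
# A linear interpolator stands in for a trained AI upscaler.

def downsample(signal, factor=2):
    """Keep every `factor`-th sample -- this is what gets transmitted."""
    return signal[::factor]

def upscale(small, factor=2):
    """Reconstruct missing samples by linear interpolation.
    A neural upscaler would do this step using learned priors."""
    out = []
    for a, b in zip(small, small[1:]):
        out.append(a)
        for i in range(1, factor):
            out.append(a + (b - a) * i / factor)
    out.append(small[-1])
    return out

original = [0, 1, 2, 3, 4, 5, 6, 7, 8]
sent = downsample(original)   # [0, 2, 4, 6, 8] -- half the data on the wire
restored = upscale(sent)      # compares equal to the original here
print(sent, restored)
```

For real media the win comes from the model hallucinating plausible detail that plain interpolation cannot, at the cost of the result no longer being bit-exact.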
jl6•2h ago
It has long been recognised that the state of the art in data compression has much in common with the state of the art in AI, for example:

http://prize.hutter1.net/

https://bellard.org/nncp/

ddtaylor•2h ago
Some view these as so interconnected that they will say LLMs are "just" compression.
pjc50•49m ago
Which is an interesting view when applied to the IP. I think it's relatively uncontroversial that an MP4 file which "predicts" a Disney movie which it was "trained on" is a derived work. Suppose you have an LLM which was trained on a fairly small set of movies and you could produce any one on demand; would that be treated as a derived work?

If you have a predictor/compressor LLM which was trained on all the movies in the world, would that not also be infringement?

mr_toad•34m ago
MP4s are compressed data, not a compression algorithm. An MP4 (or any compressed data) is not a “prediction”, it is the difference between what was predicted and what you’re trying to compress.

An LLM is (or can be used) as a compression algorithm, but it is not compressed data. It is possible to have an overfit algorithm exactly predict (or reproduce) an output, but it’s not possible for one to reproduce all the outputs due to the pigeonhole principle.

To reiterate - LLMs are not compressed data.
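
That "difference from the prediction" point can be illustrated with a toy order-1 predictor in Python (a sketch of the principle, not any real codec's algorithm):

```python
# Sketch of "store the difference from what was predicted", the idea behind
# predictive codecs. Predictor here: "the next sample equals the previous
# one". Smooth signals yield small residuals, which an entropy coder can
# then pack into few bits.

def encode(samples):
    residuals = [samples[0]]              # first sample is sent as-is
    for prev, cur in zip(samples, samples[1:]):
        residuals.append(cur - prev)      # difference from the prediction
    return residuals

def decode(residuals):
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)   # add the residual back
    return samples

signal = [1000, 1002, 1003, 1001, 1000, 998]
res = encode(signal)          # [1000, 2, 1, -2, -1, -2] -- small numbers
assert decode(res) == signal  # lossless round trip
```

The residual stream carries exactly the information the predictor failed to guess, which is why a better predictor leaves less to store.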

Retr0id•2h ago
AI and data compression are the same problem, rephrased.
oblio•2h ago
Which makes Silicon Valley, the TV show, even funnier.
chisleu•1h ago
holy shit it does. The scene with him inventing the new compression algorithm basically foreshadowed the gooning to follow local LLM availability.
tdullien•2h ago
Every predictor is a compressor, every compressor is a predictor.

If you're interested in this, it's a good idea reading about the Hutter prize (https://en.wikipedia.org/wiki/Hutter_Prize) and going from there.

In general, lossless compression works by predicting the next (letter/token/frame) and then encoding the difference from the prediction in the data stream succinctly. The better you predict, the less you need to encode, the better you compress.

The flip side of this is that all fields of compression have a lot to gain from progress in AI.
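
A small Python sketch of "the better you predict, the less you encode": the ideal code length for a symbol predicted with probability p is -log2(p) bits, which an arithmetic coder approaches. The two models here are invented for illustration:

```python
import math

def ideal_bits(text, predict):
    """Sum the ideal code length (-log2 p) a coder driven by `predict`
    would spend on each symbol."""
    total = 0.0
    seen = {}
    for ch in text:
        total += -math.log2(predict(ch, seen))
        seen[ch] = seen.get(ch, 0) + 1    # update counts after coding
    return total

def uniform(ch, seen):
    return 1 / 256                        # knows nothing: 8 bits per symbol

def adaptive(ch, seen):
    # Laplace-smoothed frequency model over a 256-symbol alphabet.
    return (seen.get(ch, 0) + 1) / (sum(seen.values()) + 256)

text = "aaaaabaaaaacaaaaab" * 20
print(ideal_bits(text, uniform))    # 2880.0 -- a flat 8 bits per symbol
print(ideal_bits(text, adaptive))   # far fewer: 'a' quickly becomes cheap
```

Swap the frequency model for a strong learned predictor and the same accounting explains why better AI means better compression.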

dathinab•2h ago
sure, it being a 6-digit code with potential for social engineering can be an issue,

similar to getting a "was this your login?" yes/no prompt in an authenticator app, though a bit less easy to social-engineer, but in turn also susceptible to brute-force attacks (much as TOTP is)

though on the other hand:

- some things need so little security that it's fine (like configuration pages for email newsletters, or similar places where you have an email-only unlock anyway)

- if someone has your email they can do a password reset anyway

- if you replace the email code with a login link you get some cross-device hurdles but fix some of the social-engineering vectors (i.e. it's like a password reset on every login)

- you can still combine it with 2FA, which, if combined with a link instead of a PIN, is basically the password-reset flow => should be reasonably secure

=> either way, that login was designed for very low security use cases where you also wouldn't ever bother with 2FA because losing the account doesn't matter. IMHO, don't use it for anything else :smh:

mschuster91•2h ago
I think you misplaced this comment and it belongs here: https://news.ycombinator.com/item?id=44819917
cpcallen•2h ago
Did you mean to post this comment at https://news.ycombinator.com/item?id=44819917 ?
dathinab•15m ago
yes, that is embarrassing
ZeroGravitas•2h ago
There's nothing obscure about them.

His comment immediately after describes exactly what happened:

> Even before it has ceased to exists, the MPEG engine had run out of steam – technology- and business wise. The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies. From facilitators of new opportunities and experiences, MPEG standards have morphed from into roadblocks.

Big companies abused the setup that he was responsible for. Gentlemen's agreements to work together for the benefit of all got gamed into patent landmines and it happened under his watch.

Even many of the big corps involved called out the bullshit, notably Steve Jobs refusing to release a new QuickTime until they fixed some of the most egregious parts of AAC licensing way back in 2002.

https://www.zdnet.com/article/apple-shuns-mpeg-4-licensing-t...

scotty79•1h ago
> The same obscure forces that have hijacked MPEG had kept it hostage to their interests impeding its technical development and keeping it locked to outmoded Intellectual Property licensing models delaying market adoption of MPEG standards. Industry has been strangled and consumers have been deprived of the benefits of new technologies.

Copyright is cancer. The faster AI industry is going to run it into the ground, the better.

knome•1h ago
This has nothing to do with copyright. It is an issue of patents.
scotty79•1h ago
I think if IP rights holders were mandated to pay property tax it would make the system much healthier.
londons_explore•1h ago
This. You should have to declare the value of a patent and pay 1% of that value to the government every year. Anyone else can force-purchase it for that value, leaving you with a free perpetual license.
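
As a numeric sketch of the incentive this Harberger-style self-assessed tax creates (all figures invented):

```python
# Self-assessed IP tax sketch (Harberger-style). All figures invented.
TAX_RATE = 0.01                        # 1% of the declared value, per year

def yearly_tax(declared_value):
    """What the holder pays annually on their own declared value."""
    return TAX_RATE * declared_value

def undervaluation_exposure(true_value, declared_value):
    """Declaring low saves tax, but anyone may force-purchase at the
    declared value -- this is what the holder stands to lose."""
    return true_value - declared_value

print(yearly_tax(10_000_000))                          # declare $10M: $100k/year
print(undervaluation_exposure(10_000_000, 1_000_000))  # declare $1M: $9M at risk
```

The tension between the two numbers is what pushes holders toward honest valuations: declare high and you pay more tax, declare low and you invite a cheap buyout.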
LeafItAlone•47m ago
Wouldn’t that only help the “big guys” who can afford to pay the tax?
MyOutfitIsVague•2m ago
Presumably the tax would be based on some estimated value of the property, and affordability would therefore scale.
marcodiego•1h ago
> My Christian Catholic education made and still makes me think that everybody should have a mission that extends beyond their personal interests.

I remember this same guy complaining that investments in the MPEG extortionist group would disappear because they couldn't fight AV1.

He was part of a patent mafia and is only lamenting that he lost power.

Hypocrisy in its finest form.

maxloh•1h ago
Any link to his comment?
marcodiego•52m ago
> all the investments (collectively hundreds of millions USD) made by the industry for the new video codec will go up in smoke and AOM’s royalty free model will spread to other business segments as well.

https://blog.chiariglione.org/a-crisis-the-causes-and-a-solu...

He is not a coder, not a researcher; he is only part of the worst game in this industry: making money from patents and "standards" you need to pay to use, implement, or claim compatibility with.

DragonStrength•39m ago
You missed the first part of that quote:

> At long last everybody realises that the old MPEG business model is now broke

And the entire post is about how dysfunctional MPEG is and how AOM rose to deal with it. It is tragic to waste so much time and money only to produce nothing. He's criticizing the MPEG group and their infighting. He's literally criticizing MPEG's licensing model and the leadership of the companies in MPEG. He's an MPEG member saying MPEG's business model is broken yet no one has a desire to fix it, so it will be beaten by a competitor. Would you not want to see your own organization reform rather than die?

Reminder AOM is a bunch of megacorps with profit motive too, which is why he thinks this ultimately leads to stalled innovation:

> My concerns are at a different level and have to do with the way industry at large will be able to access innovation. AOM will certainly give much needed stability to the video codec market but this will come at the cost of reduced if not entirely halted technical progress. There will simply be no incentive for companies to develop new video compression technologies, at very significant cost because of the sophistication of the field, knowing that their assets will be thankfully – and nothing more – accepted and used by AOM in their video codecs.

> Companies will slash their video compression technology investments, thousands of jobs will go and millions of USD of funding to universities will be cut. A successful “access technology at no cost” model will spread to other fields.

Money is the motivator. Figuring out how to reward investment in pushing the technology forward is his concern. It sounds like he is open to suggestions.

marcodiego•29m ago
A business model that was always a force slowing down development, implementation and adoption is not something that should be "fixed". MPEG dying is something to celebrate, not whine about.
cnst•27m ago
His argument is blatantly invalid.

He first points out that a royalty-free format was actually better than the patent-encumbered alternative he was responsible for pushing.

In the end, he concludes that progress in video compression would stop if developers couldn't make money from patents, providing a comparison table of codec improvements that conveniently omits that the aforementioned royalty-free codec is better than the commercial alternatives pushed by his group.

Beyond that fallacy, the article is simply full of boasting about his own self-importance, with religious connotations.

selvan•54m ago
Maybe we are a couple of years away from experiencing patent-free video codecs based on deep learning.

DCVC-RT (https://github.com/microsoft/DCVC), a deep-learning-based video codec, claims to deliver 21% more compression than H.266.

One of the compelling edge-AI use cases is creating deep-learning-based audio/video codecs that run on consumer hardware.

One of the large/enterprise AI use cases is a coding model that generates deep-learning-based audio/video codecs for consumer hardware.

_bent•43m ago
https://mpai.community/standards/mpai-spg

This makes zero sense, right? Even if it were applicable, why would it need a standard? There is no interoperability between the game servers of different games.