1. Does an AI "reading" source code that has been otherwise lawfully obtained infringe copyright? Is this even enforceable?
2. Why write a new license rather than just adding a rider to the AGPL? This is missing language the AGPL uses to cover usage (rather than just copying) of software.
To the extent that this has been decided under US law, no. AI training on legally acquired material has been deemed fair use.
The operative words are the last four there. GPL, and all other software licenses (copyleft or not), can only bind you as strongly as the underlying copyright law. They are providing a copyright license that grants the licensee favorable terms, but it's still fundamentally the same framework. Anything which is fair use under copyright is also going to be fair use under the GPL (and LLMs are probably transformative enough to be fair use, though that remains to be seen.)
Arguably, at least in the US, it has been seen. Unless someone comes up with a novel argument not already advanced in the Anthropic case about why training an AI on otherwise legally acquired material is not transformative enough to be fair use, I don't see how you could read the ruling any other way.
Honestly, with how much focus there tends to be on *GPL in these discussions, I get the feeling that MIT-style licenses are the most frequently violated, because people treat them as public domain.
Demand better protections. Demand better pay.
Demand your rights. Demand accountability for oppressors.
Real "let me speak to your manager" activism. You have to have been sheltered in a really extreme way not only to say things like this, but to listen to it without laughing.
Here's some unrequested advice: the way to make simple people follow you is to make them feel like leaders among people they feel superior to, and to make them feel like rebels among people they feel inferior to. Keep this in mind and introspect when you find yourself mindlessly sloganeering.
Unsure who you are addressing, but clearly it's someone other than me.
Did you see where the OP implied that any activism is useless? Got any harsh words for that philosophy?
The LLMs are harmful to the business of creating software. Full stop. Either we can do something about it (like expose the futility of licensing in general), or we can just die.
While I think this licensing effort is likely to be ignored, I applaud it and hope more things like this continue to be created. The silicon valley VC hose is truly evil.
> Any modified versions, derivative works, or software that incorporates any portion of this Software must be released under this same license (HOPL) or a compatible license that maintains equivalent or stronger human-only restrictions.
That’s not what copyleft means, that’s just a share-alike provision. A copyleft provision would require you to share the source-code, which would be beautiful, but it looks like the author misunderstood…
> A copyleft provision would require you to share the source-code, which would be beautiful, but it looks like the author misunderstood…
This license doesn't require the original author to provide source code in the first place. But then, neither does MIT, AFAICT.
But also AFAICT, this is not even a conforming open-source license, and the author's goals are incompatible.
> ...by natural human persons exercising meaningful creative judgment and control, without the involvement of artificial intelligence systems, machine learning models, or autonomous agents at any point in the chain of use.
> Specifically prohibited uses include, but are not limited to: ...
From the OSI definition:
> 6. No Discrimination Against Fields of Endeavor
> The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
Linux distros aren't going to package things like this because it would be a nightmare even for end users trying to run local models for personal use.
Probably rules out any modern IDE's autocomplete.
Honestly, with the wording 'chain of use', even editing the code in vim while using ChatGPT for some other part of the project could be argued to be part of the 'chain of use'.
But EU jurisdictions? I'm quite curious where this will go. Europe is much keener to protect natural persons' rights against corporate interests in the digital sphere, particularly since it has much less to lose, the EU digital economy being much weaker.
I could imagine ECJ ruling on something like this quite positively.
How strongly is that? Would it really be that catastrophic to return all business processes to how they were in, say, 2022?
Yeah, imagine shutting down all the basic research that has driven the economy for the last 75 years, in a matter of months. Crazy. Nobody would do that.
And what about jobs lost (or never created) due to AI itself?
Would not Google/Amazon/Meta have continued on to advance their product lines and make new products, even if not AI? Would not other new non-AI companies have been created?
I'm not convinced that the two options are, "everything as it is right now", or, "the entire economy is collapsed".
https://www.theatlantic.com/economy/archive/2025/09/ai-bubbl...
https://www.sbs.com.au/news/article/government-rules-out-cha...
If I'm reading this and the license text correctly, it treats the AI as a principal in itself, but to the best of my knowledge, no regulation considers AI a principal; it is treated only as a tool controlled by a human principal.
Is it trying to prepare for a future in which AIs are legal persons?
EDIT: Looking at it some more, I can't help but feel that it's really racist. Obviously, if it were phrased with an ethnic group instead of AI, it would be deemed illegally discriminatory. And I'm thinking that if and when AIs (or cyborgs?) are considered legal persons, we'd likely have some anti-discrimination regulation for them, which would make this license illegal.
Humans are awful.
But as an example: we know of animals that show genuine emotion being treated cruelly just because they are of a specific species (race, if you can consider AI/LLMs a race; then animals surely count as well, when we share 99% of our DNA with some of them).
But animals aren't treated that way unless the laws of a constitution created a ban against cruelty to animals.
So it is our constitution, which is just a shared understanding between people and some fictional construct that gains meaning via checks and balances; these fictional constructs then become part of a larger construct (the UN) to try to create a baseline of rights.
So the only thing that could happen is, for example, a violation of UN rights, but those are only enforceable if people at scale genuinely believe in the message, or in the notion that one person's violation of another person's UN rights is ethically immoral and should be punished if we as a society don't want to tolerate intolerance (I really love bringing up that paradox).
I genuinely feel like this comment and my response to it should be cemented for posterity, because of something I am going to share. I want everybody to read it if possible, because of what I am about to say.
>if you're assuming that we are considered sentient enough to gain personhood. And your first reaction to that is to restrict our rights?
What is sentience to you? Is it the ability to feel pain, or is it the ability to write words?
Since animals DO feel pain and we RESTRICT their RIGHTS, yet you and many others are willing to fight for the rights of something that doesn't feel pain, that is nothing but a mere calculation, linear algebra really, just a very long one with lots of variables/weights generated by one set of people taking/"stealing" the work of other people over whom they (generally speaking) have no rights.
Why are we not thinking of animals first, before thinking about a computation? The ones which actually feel pain, and which are feeling pain right as you and I speak and others watch.
Just because society makes it socially acceptable, the constitution makes it legal. Both are shared constructs that arise when we try to box people together in what is known as a society; this is our attempt at generating order out of randomness.
> Humans are awful.
I genuinely feel like this might be the statement people in the future bring up when talking about how we used to devour animals that suffered in pain, even when vegetarian options existed.
I once again recommend the Joaquin Phoenix-narrated documentary Earthlings: https://www.youtube.com/watch?v=8gqwpfEcBjI
People from the future might compare our treatment of animals to the way we now condemn parts of our ancestors' society (slavery).
If I am being too agitated on this issue and this annoys any non-vegetarian: please, I understand your situation too; in fact I sympathize with you. I was born into a part of a society/nation/state which valued vegetarianism and I conformed to that, and you might have conformed to being a non-vegetarian due to society as well, or you might have genuine reasons. Still, I just want to share that watching that documentary is the best way to educate yourself on the atrocities indirectly caused by our ignorance, or by willfully looking away from this matter. This is uncomfortable, but this is reality.
As I have said many times, societies are just a shared construct of people's beliefs, really. In an ideal world, we would have an evolution of ideas: random mutations in ideas, seeing which survive via logic, and then adopting them into society. Yet someone has to spread the word of an idea, or in this case, cause discomfort. This is the only thing we can do in our society if one truly believes in logic. I feel there are both logical and moral arguments for veganism, and that people breaking with the conformity of society, in the spirit of what they believe in, can re-transform what the conforming belief of the overall society is.
If someone wants to talk about it, or has watched the documentary and wants to discuss it, please let me know how you liked the film and how it impacted you. And as always, have a nice day.
> I am not a legal expert, so if you are, I would welcome your suggestions for improvements
> I'm a computer engineer based in Brussels, with a background in computer graphics, webtech and AI
Particularly when they've already established they don't care about infringing standard copyright
Suppose the software I downloaded is scanned by a virus scanner that uses AI to detect viruses. Who is in violation? How do you even meaningfully know when it has accessed the software, and what happens if it does?
This license also violates the basic Software Freedoms. Why should a user not be allowed to use AI on software?
The basic idea is that the person accessing your content to put it into a model agrees your content is a thing of value and in exchange grants you a license to anything that comes out of the model while your content is incorporated.
For example, suppose your art is put into a model and then the model makes a major movie. You now have a license to distribute that movie, including for free...
What about the model itself, though? It is nothing but weights, generated by essentially transforming data that was unlawfully obtained, or obtained in a way that violated contract law.
It wasn't the person writing the prompt who generated the movie via the model, and it wasn't the movie or the prompt that violated the contract, but the model, or the scraping company itself, no?
Also, you mention any output. That means that if someone violates your terms of service, and let's say you created a square (for lack of a better word) and someone else created a circle,
and an AI is trained on both, so one day it produces both a square and a circle as output.
What you're saying is that this should give you the right to "use and re-license any output or derivative works created from that trained Generative AI System."
So could I use both square and circle now? Or could I re-license both now? How would this work?
Or are you saying only things directly trained on, or square-like output, would be considered in that sense?
So how about a squircle? What happens if the model outputs a squircle; who owns it, and who can re-license it then?
What if the square party wants to re-license it to X, but the circle party wants to re-license it to Y?
Also, what if the AI company says it's fair use / a derivative work? I am not familiar with contract law, or any law for that matter, but I feel these things rely on an underlying faith in the notion that AI training isn't fair use. What are your thoughts? How does contract law prevent the fair use argument?
This is indeed a weak point in the contract approach: people can't be bound by a contract they never knew about or agreed to.
However if they acquired a "stolen" copy of my content, then (IANAL) it might offer some new options over in the copyright-law realm: Is it still "fair use" when my content was acquired without permission? If a hacker stole my manuscript-file for a future book, is it "fair use" for an AI company to train on it?
> it wasn't the person creating the prompt which generated the movie via the model
The contract doesn't limit what the model outputs, so it doesn't matter who to blame for making/using prompts.
However the model-maker still traded with me, taking my stuff and giving me a copyright sub-license for what comes out. The "violation" would be if they said: "Hey, you can't use my output like that."
> So could I use both square and circle now? [...] a squircle
Under contract law, it doesn't matter: We're simply agreeing to exchange things of value, which don't need to be similar.
Imagine a contract where I trade you 2 eggs and you promise me 1 slice of cake. It doesn't matter if you used those eggs in that cake, or in a different cake, or you re-sold the eggs, or dropped the eggs on the floor by accident. You still owe me a slice of cake. Ditto for if I traded you cash, or shiny rocks.
The main reason to emphasize that "my content is embedded in the model" has to do with fairness: a judge can void a contract if it is too crazy ("unconscionable"). Incorporating my content into their model is an admission that it is valuable, and keeping it there indefinitely justifies my request for an indefinite license.
> What if square party wants to re-license it to X but circle party wants to re-license it to Y
If the model-runner generates X and wants to give square-prompter an exclusive license to the output, then that's a violation of their contract with me, and it might be grounds to force them to expensively re-train their entire model with my content removed.
A non-exclusive license is fine though.
Assuming a standard website without a signup wall, this seems like a legally dubious assertion to me.
At what point did the AI bot accept those terms and conditions, exactly? As a non-natural person, is it even able to accept?
If you're claiming that the natural person responsible for the bot is responsible, at what point did you notify them about your terms and conditions and give them the opportunity to accept or decline?
It's a different situation if the website is gated with an explicit T&C acceptance step, of course.
Licenses have been useful in the narrow niche of extracting software engineering labor from large corporations, mostly in the US. The GPL has done the best job of that, as it has a whole organization dedicated to giving it teeth. Entities outside the US, and especially outside of the West, are less vulnerable to this sort of lawfare.
> Any contract term is void to the extent that it purports, directly or indirectly, to exclude or restrict any permitted use under any provision in
> [...]
> Division 8 (computational data analysis)
I don't know how you can post something publicly on the internet and say, this is for X, Y isn't allowed to view it. I don't think there's any kind of AI crawler that's savvy enough to know that it has to find the license before it ingests a page.
Personally, beyond reasonable copyrights, I don't think anyone has the right to dictate how information is consumed once it is available in an unrestricted way.
At a minimum anything released under HOPL would need a click-through license, and even that might be wishful thinking.
> The 9th Circuit ruled that hiQ had the right to do web scraping.
> However, the Supreme Court, based on its Van Buren v. United States decision, vacated the decision and remanded the case for further review [...] In November 2022 the U.S. District Court for the Northern District of California ruled that hiQ had breached LinkedIn's User Agreement and a settlement agreement was reached between the two parties.
So you can scrape public info, but if there's some "user agreement" you can be expected to have seen, you're maybe in breach of that; still, the remedies available to the scrapee don't include "company XYZ must stop scraping me", as that might allow them unfair control over who can access public information.
Open Source licenses give license to the rights held exclusively by the author/copyright-holder: making copies, making derivative works, distribution.
An open source license guarantees others who get the software are able to make copies and derivatives and distribute them under the same terms.
This license seeks to gain additional rights, namely the right to control who uses the software, and offers nothing in exchange.
IANAL, but I think it needs to be a contract, with consideration and evidence of acceptance and all that, to gain additional rights. Just printing terms in a copyright license won't cut it.
How can you have a legitimate copy of software without a license, assuming that the software requires you to have a license? You are simply using circular reasoning.
You can because someone bought a physical copy, and then exercised their rights under the first sale doctrine to resell it. (With sales on physical media being less common, it's harder to get a legitimate copy of software without a license than it used to be.)
To the best of my (admittedly limited) knowledge, no court has yet denied the long-standing presumption that, because a program needs to be copied into memory to be used, a license is required.
This is, AFAIK, the basis for non-SaaS software EULAs. If there was no legal barrier to you using software that you had purchased, the company would have no grounds upon which to predicate further restrictions.
This was specifically validated by the 9th Circuit in 1993 (and implicitly endorsed by Congress subsequently adopting a narrow exception for software that is run automatically when turning on a computer, copied into memory in the course of turning on the computer as part of computer repair.)
There is no legal barrier to using a legit copy of software. That is why software companies try to force you to agree to a contract limiting your rights.
Copying is, and copying into memory is inherently necessary to use. (Of course, in some cases, copying may be fair use.)
> If I have a legitimate copy of the software I can use it,
If you can find a method to use it without exercising one of the exclusive rights in copyright, like copying, sure, or if that exercise falls into one of the exceptions to copyright protection like fair use, also sure, otherwise, no.
> Just like I don't need a license to read a book.
You can read a book without copying it.
> Copying is, and copying into memory is inherently necessary to use. (Of course, in some cases, copying may be fair use.)
Has this interpretation actually been upheld by any courts? It feels like a stretch to me.
That copying into RAM, including specifically in the context of running software, is included in the exclusive right of copying reserved to the copyright holder except as licensed by them? Yes, the main case I am familiar with being MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511 (9th Cir. 1993) [0]; note that for the specific context of that case (software that is run automatically when activating a computer in the course of maintenance or repair of that computer), Congress adopted a narrow exception after this case, codified at 17 USC § 117(c) [1], but that validates that in the general case, copying into RAM is a use of the exclusive rights in copyright.
[0] https://en.wikipedia.org/wiki/MAI_Systems_Corp._v._Peak_Comp....
> it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided:
> (1) that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner
i.e. the owner of a copy of a computer program has the right to make more copies if necessary to use it (e.g. copy-to-RAM, copy to CPU cache) as long as they don't use those additional copies for any other purpose. That same section also gives you the right to make backups as long as you destroy them when giving up ownership of the original.
Let's assume it's a really short book – say a poem – and by reading it, I accidentally memorized it. Have I now violated copyright?
I think something does not add up with this logic.
https://www.searchengineworld.com/perplexity-responds-to-clo...
It has become abundantly clear that AI companies can train however they want, and nobody will enforce anything.
Realistically speaking, even if you could prove someone misused your software as per this license, I don't expect anything to happen. Sad but true.
At this point, I don't care about licensing my code anymore, I just want the option to block it from being accessed from the US, and force its access through a country where proper litigation is possible.
The copyright lobby wrote the EU's AI Act, which forces them to publish the list of copyrighted works used as training data. This is an entry point to then ask them for money.
You've already violated section 1(b) by having an AI parse it, which is technically covered under the fair use doctrine.
This makes it more of a philosophical statement than a functional legal instrument.
Lots of well-tested software was produced without any kind of AI intervention. I hope that continues to be true.
A UI automation script is arguably an autonomous agent.
Easier to avoid this license than get into some philosophical argument.
It's actually very useful for bots to crawl the public web, provided they are respectful of resource usage - which, until recently, most bots have been.
The problem is that shysters, motivated by the firehose of money pointed at anything "AI", have started massively abusing the public web. They may or may not make money, but either way, everyone else loses. They're just ignoring the social contract.
What we need is collective action to block these shitheads from the web entirely, like we block spammers and viruses.
I have a feeling that would be hard to do in such a way that it accomplishes what the author is trying to accomplish.
I wonder if the first people who saw proprietary web services using GPL code the community wrote, code that made it easier and faster for them to build (similar to AI), thought: I will just license my code to forbid its use in any proprietary web service (it's called the AGPL).
There are other licenses like ACAP (https://anticapitalist.software/) etc.
Some of these aren't OSI-compliant FOSS, but honestly, why does that matter if I am the creator thinking about licenses, y'know?
Like, it's my software; I wrote it, I own the rights, so I am free to do whatever I want with it. If someone wants to write HOPL software, that's within their rights too. I just don't like it when our community sometimes pitchforks people for not conforming, rather than offering commentary.
I am not trying to compare the GPL with HOPL, but I am pretty sure the GPL was ridiculed by people at the start. Someone with knowledge, please let me know and provide some sources; I am curious how the world reacted when the GPL/FSF was born and unleashed into the world. I'd appreciate personal experiences even more, if someone went through that era; I think it was a really transformative moment for open source in general.
This is interesting, but at the same time IANAL, and I have a question regarding backend systems.
Suppose I have AGPL software, say a photo-editing web app, and a customer takes a photo and reshapes it or whatever and gets a new photo. Saying that the new photo somehow becomes AGPL would be weird,
but the same thing is happening here if a backend service uses it. My question is: what if someone creates a local proxy to that backend service and the AI scrapes that local proxy, or someone copies the output and pastes it into an AI? I don't understand, since I feel there isn't even a proper definition of AI here; could it theoretically cover everything automated? What if it isn't AI which directly accesses it?
Another thing: it seems the backend service could take user input; think of a backend service like Codeberg / Forgejo / Gitea etc.
If I host a git server using software that uses HOPL, wouldn't that also inherently enforce terms and conditions on the code hosted in it?
This seems like a genuinely nice idea, and I have a few interesting takes on it.
Firstly, what if I take FreeBSD, which is under the permissive BSD license IIRC, add a HOPL license to it (or its future equivalent?), and then build an operating system?
Now, technically, wouldn't everything be a part of this new Human-Only BSD (HOB), lol. I am not sure, but this idea sounds damn fascinating: imagine a cloud where I can just change the operating system, proudly mention it runs HOB, and it would try to enforce limits on AI.
What I am more interested in is text: can I theoretically write this comment under a human-only public license?
What if I create a service like Mataroa, but where the user who wants to write the blog specifies that the text itself becomes HOPL? That could reduce their frustration regarding AI, knowing they are trying to combat it.
Also, I am not sure this could legally be done. It just seems like a way for people to legally enforce robots.txt, if it works, but I have my questions, as I shared, and even more.
It would be funny if I wrote things with AI and then created a HOPL license
Something like HOPL + https://brainmade.org/ could go absolutely bonkers for making a human-interacts-with-human sort of thing, or at least trying to achieve that. It would be a fun social experiment to create a social media site trying this, but as I said, I doubt it would work as more than sending a message right now. But I may be wrong; I usually am.
GaryBluto•8h ago
Not to mention that all you'd need to do is get an LLM to rewrite said programs just enough to make it impossible to prove it used the program's source code.
blamestross•8h ago
Agreed that this isn't the solution.
Imustaskforhelp•4h ago
It wasn't as if the machines were looking at every piece of cloth the workers produced pre-revolution, without or against their consent. Like, there is a big difference, but I personally don't even think their resistance to change should be criticised.
I think it's fair to resist change, since not all change is equal. It's okay to fight for what you believe in, as long as you try to educate yourself about the other side's opinion and try to lay it all out logically, without much bias.
fvdessen•8h ago
I am not against AI; I use it every day and find it extraordinarily useful. But I am also trying to look ahead at what the online world will look like 10 years from now, with AI vastly better than what we have now.
It is already hard to connect online with people, as there is so much commercial pressure on every interaction; the attention they create is worth a lot of money. This will probably become 100x worse, as every company on the planet will have access to mass AI-powered propaganda tools. Those already exist, by the way; people make millions selling AI TikTok tools.
I'm afraid at some point we'll be swamped by bots. 99% of the content online will be AI-generated. It might even be of better quality than what we can produce. Would that be a win? I'm not sure. I value the fact that I am interacting with humans.
The protection we have against that, and the way things look to be progressing, is that we'll depend on authorities (official or commercial) to verify who's human. And thus we'll be dependent on those authorities to be able to interact at all. Banned from Facebook / X / etc.? No interaction for you, as no website will allow you to post content. Even as it is, I had to gatekeep my blog comments behind a GitHub account. This is not something I like.
I think it's worth looking at alternative ways to protect our humanity in the online world, even if it means remaining in niches, as those niches have value, at least to me. This post and this license is one possible solution, hopefully there are more
GaryBluto•8h ago
I'm afraid that ship has sailed.
>I think it's worth looking at alternative ways to protect our humanity in the online world, even if it means remaining in niches, as those niches have value, at least to me. This post and this license is one possible solution, hopefully there are more
While I appreciate the sentiment, I think anybody willing to create armies of bots pretending to be humans is unlikely to listen to a software license, or to operate within territories where the law would prosecute them.
Imustaskforhelp•5h ago
I don't think the advice we should give people is to just wait and watch. And if someone wants to take things into their own hands, write a license, reignite the discussion, talk about laws, then we should at least not call it naive. Personally, I respect anyone trying to do something about anything; it shows they aren't all talk and that they are trying their best, and that's all that matters.
Personally, I believe that even if this license just ignites a discussion, that alone can have compounding effects, which might rearrange themselves into a new license or something else new. The parent's comments about discussion aren't naive.
Is it naive to do a thing which you (or in this case someone else) thinks is naive, when at the same time it's the only thing you can do? Personally, I think this becomes a discussion about optimism versus pessimism, with a touch of realism.
The answer really just depends on your viewpoint. I don't think there is a clear right or wrong, and I respect your opinion (that it's naive) as long as you respect mine (that it's at least starting a discussion, which is one of the best things to do instead of just waiting and watching).
dylan604•4h ago
This is the real world. Being this "optimistic", as you say, is just living in a fantasy world. Not calling this out would just be bad.
Imustaskforhelp•3h ago
Although I like people criticizing, since in their own way they care about the project or the idea, maybe I am speaking from personal experience: when somebody shot down my idea, naive as it may have been, I felt really lost, and I think a lot of people do. I have personally found that there are ways to steer that same energy towards something you might find interesting or really impactful. So let me ask you: what do you think we can do about this situation, or that the OP should do with his license?
I personally feel we might need government intervention, but I don't have much faith in governments when they are lobbied by the same AI people. So if you have any other solution, please let me know; it would be a pleasure to discuss it.
If you feel there might be nothing we can do about it, something I can also understand, I would personally suggest not criticizing people who are trying to do something. But that's a big if, and I know you are having this conversation in good faith. I just feel we as humans should keep on trying, since that is the thing that makes us human.
tpmoney•8h ago
It also assumes it can draw some bright-line distinction between "AI" code completion and "non-AI" code completion utilities. If your code completion algorithm uses the context of your current file and/or project to order the suggestions, is that AI? Or does "AI" only mean "LLM-based AI"? (I notice a distinct lack of term definitions in the license.) If it only means LLMs, and some new model architecture is developed, is that OK since it's no longer an LLM? Can I use the output to train a diffusion model? Probably not, but what makes a diffusion model more forbidden than feeding the output into a procedural image generator? If I use the output of HOPL-licensed software to feed a climate simulation, is that allowed, even if the simulator is nothing more than a series of statistical weights and values based on observations, coded into an automatic system that produces output with no human direction or supervision? If it is allowed, where is the line between a simulation model and an AI model? When do we cross over?
I am constantly amazed at the bizarro land I find myself in these days. Information wanted to be free, right up until it was the information the "freedom fighter" was using to monetize their lifestyle, I suppose. At least the GPL philosophy makes sense: "information wants to be free, so if I give you my information, you have to give me yours."
The new "AI" world that we find ourselves in is the best opportunity we've had in a very long time to really have some public debate over copyright specifically, IP law in general and how it helps or hinders the advancement of humanity. But so much of the discussion is about trying to preserve the ancient system that until AI burst on to the scene, most people at least agreed needed some re-working. Forget "are we the baddies?", this is a "are we the RIAA?" moment for the computer geeks.