The first line of the article:
> Support for BPF in the kernel has been tied to the LLVM toolchain since the advent of extended BPF.
Should the article also explain which kernel they're referring to, what LLVM is and stands for, and highlight the differences between BPF and extended BPF? Or are they allowed to expect a motivated reader to do a cursory web search to fill in the gaps in their knowledge?
For example in this case: "eBPF is a method for user space to add code to the running Linux kernel without compromising security. They have been tied [...]. The GNU toolchain, the historical system for building Linux that many still prefer, currently has no support."
The description of what LWN and Linux are would be in the about page linked from the article.
It costs almost nothing for an expert to skim/skip two sentences while saving loads of time for everyone else.
The article is also completely missing motivation (why do we care whether BPF is supported in the second toolchain?), which would be helpful for almost everyone, including people who think it is obvious.
Edit: To be clear though, I love LWN. But the articles are very often missing important context that would be easy to add that I suspect would help a large portion of the reader base.
In this case, BPF (shorthand for eBPF) stands for Extended Berkeley Packet Filter. It’s a relatively new feature in the kernel that allows attaching small programs at certain “hook points” in the kernel (for example, when some syscall is called). These programs can pass information into userspace (like who is calling the syscall), and make decisions (whether to allow the call to proceed).
More info here https://ebpf.io/what-is-ebpf/
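For instance, a minimal sketch (illustrative only, not from the article, using libbpf conventions) of a program that hooks the openat() syscall tracepoint and reports the calling PID to user space might look like this:

    // Hypothetical example: attach at the sys_enter_openat tracepoint and
    // report which PID is making the call. Built with clang -target bpf;
    // needs libbpf's bpf_helpers.h.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("tracepoint/syscalls/sys_enter_openat")
    int trace_openat(void *ctx)
    {
        /* Upper 32 bits of the helper's return value are the tgid (the PID). */
        __u32 pid = bpf_get_current_pid_tgid() >> 32;

        /* Shows up in user space via the kernel's trace pipe. */
        bpf_printk("openat() called by pid %u", pid);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";

The kernel's verifier checks a program like this before it is allowed to run, which is what makes it safer than loading an arbitrary kernel module.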
Additionally, spelling out "Berkeley Packet Filter" is not going to help any readers here; BPF is far removed from the days when its sole job was filtering packets, and that name will not tell readers anything about why BPF is important in the Linux kernel.
It's a safe script that has access to part of the kernel, and that unlocks a lot of monitoring. The alternative would be a kernel module, which is much more unwieldy, error-prone, etc.
How correct am I?
ggm•2d ago
mustache_kimono•1d ago
The FSF considers the Apache License 2.0 incompatible with GPLv2 because of its "additional conditions".
I happen to agree with you that, at the very least, we haven't fully grappled with the fact that FOSS, like Linux, is published to the Internet and is freely available for anyone to read. Obviously, there should be a distinction between reading and copying, just like there is a distinction between reading and copying a literary work.
The issue -- as I see it -- is that many GPL fanatics just don't see it the same way. I believe Linus has even opined that any filesystem which was developed after Linux, whose developers are aware of Linux, could be considered a "derived work". This is of course ridiculous, but the GPL, if read without due care and funneled through social media slop, can be ridiculous.
jcelerier•1d ago
loeg•1d ago
bayindirh•1d ago
This is why black box reverse engineering exists.
The same is true for console reverse engineering. No self-respecting reverse engineer reads code leaks from official console development; otherwise they'd be in legal hot water.
This is serious stuff, and there's no blurry line here.
psychoslave•1d ago
bayindirh•1d ago
If it emits a large block of copyrighted material, you'll again be in legal hot water.
Considering that even fair use can be abused at will (see what GamersNexus is going through), it looks even bleaker than at first glance.
bawolff•1d ago
I think the only unreasonable part is that LLM companies are implicitly, or sometimes explicitly, advertising their products' output as being fit for use in other projects. I think that is a false-advertising problem.
netbsdusers•21h ago
schoen•21h ago
https://en.wikipedia.org/wiki/Substantial_similarity
(usual factual elements in determining the possibility of a copyright infringement in U.S. law).
I agree with you that it's possible in principle that copyright infringement would not be found even when there was evidence of access. But I think the courts would usually give the defendant a higher burden in that case. You can see in the Wikipedia article that there has been debate about whether access becomes more relevant when the similarity is greater and less relevant when the similarity is less (apparently the current Ninth Circuit standard on that is "no"?).
bawolff•1d ago
As an example, if you take a painting someone else made and try to make your own version using the original as a reference, that is probably subject to the original author's copyright. On the other hand, if you both happen to paint the same sunset, it's all OK.
I think you're more stating how you would like copyright to work, not how it actually does.
loeg•1d ago
Nope. This is just how it works. I don’t care one way or another.
ggm•1d ago
It's a really low bar to avoid, tbh. The point is that people have hobbies, and aspects of this work can look like a hobbyist's "but I don't want to do it that way" view.
As a consumer of compiler products it doesn't have to matter to me, nor as a user of compilers. It's only reading the comments and the article that brought this to mind: LLVM is proof by example of a different kind of open source. It's not a barrier I would struggle to pierce, given my own personal view of code licences.
(I'm old enough to have read the GNU Manifesto when it was first published, btw)
mustache_kimono•1d ago
I have the feeling you're arguing against and about something I never said.
To clarify, I'll restate: "I believe Linus has even opined that any filesystem which was developed after Linux, whose developers are aware of Linux, could be considered a 'derived work'. [The view that any new filesystem, simply aware of, but created independent of, and after Linux, is a derived work of Linux] is of course ridiculous,..."
> It's why black box reverse engineering exists and is generally legal, while white box reverse engineering is generally illegal.
Oh, I agree a clean room implementation is generally the best legal practice. I am just not sure there are cases on point that always require a clean room implementation, because I am aware of cases which expressly don't require clean room implementations (see Sony v. Connectix and Sega Enterprises Ltd. v. Accolade, Inc). And, given the factual situation has also likely changed due to FOSS and the Internet, I am saying some of these questions are likely still open, even if you regard them as closed.
LegionMammal978•1d ago
Software generally receives wide protection for its 'non-literal elements', but it's not the case that every possible iota of its function falls under its protectable expression. Indeed, plenty of U.S. courts have subscribed to the "abstraction-filtration-comparison test" which explicitly describes how some characteristics of a program can be outside the scope of its protection.
But in practice, many of the big software copyright cases have been about literal byte-for-byte copying of some part of a program or data structure, so that the issue comes down to either contract law or fair-use law (which Sega v. Accolade and Google v. Oracle both fall under). The scope of fine-grained protection for 'non-literal elements' seems relatively unexplored, and the maximalist interpretation has seemingly proliferated from people wanting to avoid any risk of legal trouble.
ggm•1d ago
tuna74•8h ago
rapidlua•1d ago
Let me provide some context here. These annotations aren’t there to help the compiler/linter; they exist to aid external tooling. The kernel can load BPF programs (JIT-compiled bytecode), BPF programs can invoke kernel functions, and some kernel entities can be implemented or augmented with BPF.
It is paramount to ensure that types are compatible at the boundaries and that constraints such as RCU locking are respected.
The kernel build records type info in a BTF blob. Some aspects aren’t captured in the type system, such as the RCU facet; this is what the annotations are used for. The verifier relies on the BTF.
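As a rough, simplified sketch of the mechanism (based on the BTF_TYPE_TAG pattern in compiler_types.h; config guards omitted, function name hypothetical):

    /* The attribute changes nothing about code generation; it only records
     * the string "rcu" against the pointer type in the emitted BTF. */
    #define BTF_TYPE_TAG(value) __attribute__((btf_type_tag(#value)))
    #define __rcu BTF_TYPE_TAG(rcu)

    struct cgroup;

    /* Hypothetical kernel function callable from BPF: from the BTF, the
     * verifier can see that the returned pointer is RCU-protected and
     * require the BPF program to hold the RCU read lock while using it. */
    struct cgroup __rcu *example_get_cgroup(int id);
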
ajross•1d ago
As others are pointing out, rigorous application of copyright precedent argues in the other direction.
But I agree that it's really sad to see this is where we are in the community. The Apache license isn't some crazy monstrosity; it's literally free software per the FSF! Its "additional requirements" that the GPLv2 bumps into are things like the patent grant that we all agree are good things. And it's not incompatible with GPLv3!
Yet no one can work together. GPLv2 projects won't relicense to Apache or GPLv3, GPLv3 proponents won't link to GPLv2, corporate sponsors have refused to use GPLv3 at all. Everyone looks at these historical warts and incompatibilities as fortress walls around their own worlds and has forgotten that the only reason these licenses exist in the first place is that we all agree (or used to) that software is better when we can all share it.
But apparently it's not, because Everybody Else wants to share it in the Wrong Way.
I feel very old sometimes.