

I imagine it has plenty of use cases for blue team as well, just not as many for active threat response.


The risk of that is relatively low for kernel contributions, though. Most of the work being done is porting existing protocols/firmware into the latest Linux kernel, not creating novel features.
The larger risk is instability caused by bad, hallucinated code because it was submitted under the assumption of human authorship. In both cases, further review by the Linux team can be done if they understand where that code is coming from.
Banning AI does nothing, because there's no way of knowing who uses it without proper disclosure, which wouldn't happen if it were banned. To use an example from the article, it would be like banning code written with the use of a specific brand of keyboard.
Better to have it properly disclosed than to make it illicit.


That would be true even if they didn’t use AI to reproduce it.
The problem being addressed by the Linux foundation isn’t the use of copyrighted work in developer contribution, it’s the assumption that the code was authored by them at all just because it’s submitted in their name and tagged as verified.
Does that make sense?


Even if this were true, it would only mean that the GNU license is unenforceable, not that the Linux kernel itself is infringing copyright


Yup
People want to pretend as if everything that flows downstream from the creation of LLMs is illegal, but that’s just not the reality.


The Linux Kernel is under a copyleft license - it isn't being copyrighted.
But the policy being discussed isn't allowing the use of copyrighted code - they're simply requiring that any code submitted by AI be tagged as such, so that the human using the agent is ultimately responsible for any infringing code, instead of allowing that code to go undisclosed (and even 'certified' by the dev submitting it, even if they didn't write or review it themselves).
Submissions are still subject to copyright law - the law just doesn't function the way you or OP are suggesting.


Yup.
I would also just point out that this doesn't change the legal exposure of the Linux kernel to infringing submissions from before the advent of LLMs.


LLMs themselves being products of copyright isn't the legal question at issue, it's the downstream use of that product.
If I use a copyright-infringing work as a part of a new creative work, does that new work infringe copyright by default? Or does the new work need to be judged itself as to the question of infringing a copyrighted work?
And if it is judged as infringing, who is responsible for the damage done? Can I pass the damages back to the original infringing work? Or should I be held responsible for not performing due diligence?


If you think “bad” is too vague, then that isn't a new problem.
Linux has always had to reject 'bad' code submissions - what's new here is that the kernel team isn't willing to prejudge all AI code as “bad”, even if that would be easier.


That’s not really how copyright law works.


Your average American would be wrong, too.


You’re justifying their executions
Only if you’re incapable of holding two thoughts in your head at the same time.


If your perspective on international anti-government protests is seriously challenged by the involvement of clandestine CIA support, I have really bad news for you
Arming oppositional forces of our foreign adversaries is possibly the most consistent function of the CIA, and yet I'm supposed to doubt their involvement in the violent outbreak in Iran that happened just as the US was planning to start a war?
Never mind how untrustworthy Trump is as a messenger - I would be surprised if we weren't involved.


The important part, though, is that establishment democrats and the consultant class over-value the type of media that participates in this derangement.
So we'll (again) end up with a Democratic establishment that cold-shoulders leftist independent creators, and they will (again) lose and blame those same people.


You’re thinking of actual Joe Rogan


Make that 2 fighter jets downed and a blackhawk critically damaged
Kinda, but they're specifically saying that the AI agent cannot itself tag the contribution with the sign-off - like, someone using Claude Code to submit PRs on their behalf. The developer must add the tag themselves, indicating that they at least reviewed and submitted it themselves, and it wasn't just an agent going off-prompt or some other shit and submitting it without the developer's knowledge. This is saying 'the dog ate my homework' is not a valid excuse.
The developer can use AI, but they must review the code themselves, and the agent can’t “sign-off” on the code for them.
What does holding any individual responsible on a development team do? The Linux project is still responsible for anything they put out in the kernel just like any other project, but individual developers can be removed from the contributing team if they break the rules and put it at risk.
The new rule simply makes the expectations clear.