Whatever you think of AI, if one is honest, then one has to respect those Claude reviews. -- Lennart Poettering
Source: https://github.com/systemd/systemd/issu … 4053443496
AI is problematic in many ways, but it's not a binary thing
There's a difference in whether you intend to use it as a tool to help you or to blindly rely on it.
In an ideal world the AI, triggered by you, would tell you "this looks fishy and I think you should really look into it", and then you look into the situation and either confirm or dismiss the suspicion.
In reality the problem is that braindead venturers prompt the LLM "find me a CVE that qualifies for a bug bounty program" and then post thousands of CVEs in the shallow hope that one sticks - completely exhausting the human bandwidth on the receiving end.
=> "No. And if you post AI slop CVEs we're gonna sue you for harassment!"
Another, mendable problem is "user wants feature and uses LLM to write a shitty patch and then posts a merge request"
The solution to that is to see whether you can pick the user up from there and get them to turn that into a usable patch - maybe they even learn something (and how to use their own brain) along the way.
If not, you can decide to implement the feature yourself or drop it.
The world is a complex place. Things are ambivalent.
If you make me defend lennart again, I'm gonna sue you for harassment
The GitHub link is down now :C
In reality the problem is that braindead venturers prompt the LLM "find me a CVE that qualifies for a bug bounty program" and then post thousands of CVEs in the shallow hope that one sticks - completely exhausting the human bandwidth on the receiving end.
If I'm not remembering badly, that was the case with curl, which had a bounty program on HackerOne but received so much LLM-generated, braindead "hi, I wanna be a hacker" garbage that they said "okay, that's enough" and dropped the bounty. Not sure it was HackerOne, but you get the point.
In an ideal world the AI, triggered by you, would tell you "this looks fishy and I think you should really look into that" and then you look into the situation and either reject or deny the suspicion.
Yeah, but in the end, as you said, it always ends in a "please use your brain, it's not that bad" situation
str( @soyg ) == str( @potplant ) btw!
Also now with avatar logo included!
Lennart wrote: The world is a complex place. Things are ambivalent.
If you make me defend lennart again, I'm gonna sue you for harassment
Contempt of court, as we call it here.
And the result of using AI is this:
https://github.com/systemd/systemd/issues/41098
Breaking systemd-boot... are these people even serious? Of course two Microsoft employees, Lennart and Luca, would encourage this.
SERIOUSLY not impressed with the spam of systemd releases; these people need to realize that AI is not ready and never will be ready.
Last edited by system72 (Yesterday 23:15:03)
It's more like there are two embarrassing bugs:
1. if (!err) => if (err != EFI_SUCCESS): the original check inverted the behavior - the function only returned an error when there was none (EFI_SUCCESS is 0).
2. An unconditional dereference of a conditional pointer.
The AI found (1) but not the now-exposed (2), despite ret_measured being consistently guarded everywhere else in that function.
The AI isn't as good at code reviews as lennart asserted (though that's probably a matter of perspective…), but that's actually not the main problem here.