Hi everyone!
Over the last year or so, I have been contributing packages to the AUR, and as a beginner I have found it difficult to follow every recommendation stated in the extensive wiki pages perfectly. I have therefore turned to external utilities such as NotebookLM to gain a better understanding of the requirements and to verify my packages against them with more certainty. And although they have served me well as "linters", I would prefer not having to copy and paste my files into a chat interface on every change, and I also want to be able to share these helpers with the rest of the community, in the hope that they may make other beginners' lives easier and improve package quality for us all.
So, recently, I have been playing with local AI agents through tools like OpenCode, and have noticed that when integrated with MCP servers and Skills, they become quite powerful and are able to follow strict requirements quite well. It was then that I thought of mocking up Skills specifically designed to aid with the creation of AUR packages. I have crafted them carefully, feeding them all the necessary documentation and verifying that everything stated in the SKILL.md files follows the guidelines to a tee.
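For those unfamiliar with the format: a Skill is essentially a directory containing a SKILL.md file, whose frontmatter tells the agent when to load it, followed by plain-language instructions. A minimal sketch of the idea (the name and wording here are illustrative, not copied from my actual repository):

    ---
    name: aur-package-audit
    description: Audit an existing PKGBUILD against the Arch packaging guidelines. Use when asked to review, lint or verify a PKGBUILD before AUR submission.
    ---
    When auditing a PKGBUILD:
    1. Read the PKGBUILD and .SRCINFO in the working directory.
    2. Check pkgname, pkgver, license (SPDX identifiers), sources and checksums against the PKGBUILD and AUR submission guidelines on the Arch wiki.
    3. Report each violation together with the relevant wiki section; do not rewrite the file unless explicitly asked.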
I am therefore looking for feedback from the community, to see what you think about this idea, and whether you believe it's something that could truly come in handy for the creation and betterment of AUR packages all around, or whether it's something that is likely to do more harm than good. Here is a link to the GitHub repository with these skills. You can find the installation and usage instructions there.
Additionally, I would like to preemptively address concerns which I can envision being brought up:
1. This will result in more AI slop:
The main purpose of these skills is to reduce exactly that. New users are very likely to rely on AIs, which are notoriously inaccurate even when fed the right info, to verify their PKGBUILDs. We cannot prevent this, but with these skills they can give their AI agents a higher chance of following the guidelines correctly and hallucinating less. And although it may result in more users actually attempting to push packages to the AUR, those who are well-versed enough in technology to run an AI agent with these skills are likely to take it more seriously and therefore not be as problematic.
2. It will make it easier to mask malware:
Yes. It will be easier to create PKGBUILDs with malware that look innocent on the surface and follow the guidelines perfectly. But this is a textbook "appeal to consequences", and the same has been said of basically every technological advancement known to man.
3. What if the guidelines change?:
Since the repository is public, I encourage pull requests that modify existing skills with any changes people may see fit. Paradoxically, this would likely result in more people actually following the most recent changes to the guidelines, as instead of having to re-read all the wiki pages, they will simply have to update the skills, and their AI agents will do the reading for them.
4. This will result in fewer people reading the actual Wiki pages:
People who don't read the guidelines manually and just use the skills directly probably wouldn't have paid much attention to the guidelines anyway, or at least not enough to keep more of them in mind when working on their package than an AI agent would. So, hopefully this results in higher-quality packages being pushed to the AUR overall.
I appreciate both positive and negative feedback on this matter, and hope that it will be an interesting discussion for all. Thank you for reading.
Online
I'm not going to bother mentioning the quality/correctness issues with LLMs because that's been discussed to death.
If you're not willing and able to read lots of text and understand it, Arch is not the distro for you. The correct solution for such people is to direct them to a distro more suited to their preferences and skill level. One of the main appeals of Arch is that you learn about computers and GNU/Linux whilst using it. Reading wiki articles is a huge part of this. If you (try to) automate this away, why are you using Arch?
I'm also sceptical of the inevitability your wording implies, like we have no choice but to give in to slop. If people refuse to RTFM that's their problem, it's not the Arch community's responsibility to cater to them.
"Don't comment bad code - rewrite it." - The Elements of Programming Style (1978), Brian W. Kernighan & P. J. Plauger, p. 144.
Offline
I'm not going to bother mentioning the quality/correctness issues with LLMs because that's been discussed to death.
If you're not willing and able to read lots of text and understand it, Arch is not the distro for you. The correct solution for such people is to direct them to a distro more suited to their preferences and skill level. One of the main appeals of Arch is that you learn about computers and GNU/Linux whilst using it. Reading wiki articles is a huge part of this. If you (try to) automate this away, why are you using Arch?
I'm also sceptical of the inevitability your wording implies, like we have no choice but to give in to slop. If people refuse to RTFM that's their problem, it's not the Arch community's responsibility to cater to them.
You definitely raise very valid concerns, and I agree with your point that nothing can replace actually reading a manual. The problem is that, in practice, it is very common for people to not RTFM, or at least not read it well enough to avoid making very common and simple mistakes.
In my experience working with maintainers on different packages, it's more often than not that you find some sort of violation of the AUR guidelines exhibited in even the most carefully crafted packages, let alone those made by the average "indie" developer. I have committed heaps of errors, misunderstood a lot of statements, and missed crucial pieces of information while working on packages, simply due to the sheer amount of recommendations that need to be considered simultaneously to create the "perfect" package. Oftentimes we as humans forget things or are simply too lazy to check - that's a flaw as uncorrectable as LLMs' quality/correctness issues.
On the flipside, with a toolset like the one I'm proposing, AI models can be sort of "fine-tuned" to follow guidelines more closely and carefully, which doesn't mean replacing the knowledge one must obtain to achieve results of a high standard, but rather serving as an intermediary validator that verifies whether all said knowledge has been applied correctly. These skills are not intended for creating a full package from scratch, but rather as a scaffolding or auditing tool, aimed at more experienced programmers, who are serious and knowledgeable enough to go through the effort of actually setting up an agent with these skills. I believe that target demographic is less likely to ignore the Wiki completely and rely solely on AI, and they would likely benefit from having a "linting" tool at their disposal, in the form of an AI agent that double-checks their work for them. But again, there's also always the possibility of abusing the shiny toy once it's in your hands.
And on the topic of "giving in to slop": ever since ChatGPT's inception, I have been very skeptical of its use and have tried to avoid LLMs whenever I could. But, being a 4th-year CS student, I cannot name a single classmate who does not rely abundantly on AI for a considerable chunk of their workload. It is simply the reality of where the world is going as AIs get better and better, and these upcoming generations of programmers will be contributing more and more to Open Source projects and making packages. Of course, what you said is true - it is their problem if they make slop - but at the end of the day, those who will be suffering are the end users. Therefore, my hope is, if we can't force people to think on their own, to at least make their agents think a little better, and who knows - someone may actually extract tangible value from it, as I have in the past.
I apologize for the wall of text, I just have a lot of thoughts on this topic and think that it's a very important discussion to have.
Online
let me correct your problem by turning your issue upside-down: you found an issue - ok; you asked ai about it - already not ok anymore; you came up with an ai to craft you another ai tool to bloat crap because you're too lazy to rtfm - super not ok, bro
and the last reply reads like an ai response - stopped reading after the first paragraph
as 2^8 noted: here, you are the problem - not the wiki
if you lack the skill to properly contribute to the aur, you should rather not
otherwise, when you encounter an issue with what's written in the wiki, come here and ask comprehensibly: "i have an issue with X, here's what i tried and here's my result"
also: if you find issues with other packages, open an issue on the repo and let the maintainer know about it so they can fix it - just babbling "yea, i find it difficult and it seems others struggle, too" doesn't help anybody
if you want a challenge: LFS 13 just released - i'll give it a shot over the weekend
Offline
I hoped this was an attempt to use AI to help humans write better packages, but it looks like the focus is on automating the creation of packages.
For clarity:
Are the agents used as tools to check PKGBUILDs and give recommendations for issues?
Do these agents run locally or through 3rd-party services?
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough?
Try clean chroot manager by graysky
Offline
let me correct your problem by turning your issue upside-down: you found an issue - ok; you asked ai about it - already not ok anymore; you came up with an ai to craft you another ai tool to bloat crap because you're too lazy to rtfm - super not ok, bro
and the last reply reads like an ai response - stopped reading after the first paragraph
as 2^8 noted: here, you are the problem - not the wiki
if you lack the skill to properly contribute to the aur, you should rather not
otherwise, when you encounter an issue with what's written in the wiki, come here and ask comprehensibly: "i have an issue with X, here's what i tried and here's my result"
also: if you find issues with other packages, open an issue on the repo and let the maintainer know about it so they can fix it - just babbling "yea, i find it difficult and it seems others struggle, too" doesn't help anybody
if you want a challenge: LFS 13 just released - i'll give it a shot over the weekend
I appreciate your feedback. It may indeed be that I simply have my priorities backwards, though I believe you may have also slightly misinterpreted my statements. I do not mean to insinuate that the wiki is wrong in any way; I am actually trying to do the exact opposite, as I feel it's an incredibly useful resource that people may be misusing by trying to feed it into their AI agents. These skills attempt to feed that information to them in a more structured and "appropriate" manner.
I am also not saying that I would be the target user of this tool - the reason I am sharing it with the world is in the hope of reducing the number of errors I have to fix whenever I work with others. So far there have been a lot of friction points I've encountered that could have been avoided had people paid more attention to the wiki, or perhaps used a tool like the one I'm proposing - hence why I'm asking for feedback on it. Though you are right to suggest that I am part of the problem, as I have relied on this type of tool in the past. But at the same time, these tools have helped me create packages which I hope hold up to a high standard and have provided a lot of value for their users.
Lastly, it saddens me that you think my reply is AI-generated, as I would greatly value the opinion of someone clearly as knowledgeable as you on what I said there, in my regular writing style. It seems formality and soft-spokenness are attributed to laziness/ineptitude nowadays...
Online
I hoped this was an attempt to use AI to help humans write better packages, but it looks like the focus is on automating the creation of packages.
For clarity:
Are the agents used as tools to check PKGBUILDs and give recommendations for issues?
Do these agents run locally or through 3rd-party services?
You are right to think that it is an attempt to help humans write better packages: these skills are focused primarily on verifying the PKGBUILD structure and adherence to the guidelines, more so than giving instructions on how to make a package from scratch. I have simply not restricted them to a single functionality, so as to test the extent of their capabilities, but I am open to rewriting them to support pure auditing only.
Now, to answer your questions: when dispatched on an existing PKGBUILD, the aur-guides skill is explicitly told to check for correct formatting, and it references the relevant websites for the AI agent to access automatically (if it has that ability), ensuring that as many of the guidelines are met as possible. These skills can be integrated into any AI agent, both local and external, through a variety of tools, including but not limited to Claude Code, OpenCode and Cursor.
Online
both replies read like AI - reporting as potential spam - maybe some mod might spend the effort to further check this
Offline
both replies read like AI - reporting as potential spam - maybe some mod might spend the effort to further check this
Sure does. Noted.
Last edited by ewaller (Today 00:16:00)
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
The shortest way to ruin a country is to give power to demagogues.— Dionysius of Halicarnassus
---
How to Ask Questions the Smart Way
Offline
both replies read like AI - reporting as potential spam - maybe some mod might spend the effort to further check this
Sure does. Noted.
Guys, this is just how I speak. I am simply accustomed to expressing myself in an academic manner. I do not feel the need to dumb myself down just because this sort of language is now being associated with AI writing. You can run my responses through whatever AI-checking tool you want, and it will tell you that they're not generated. Don't judge something by "how it reads"; get quantifiable proof of it first.
And besides, you are deviating from the topic at hand. Let's discuss what this forum post was actually drafted for, instead of jumping to conclusions and throwing out false accusations about a completely unrelated matter.
If you want me to speak less formally, or write shorter responses, I can do so, just let me know. But I'm frankly disappointed by your attitude to something that I was quite interested in simply having a normal conversation about.
Online
I also think that the posts read like LLM output. It's a vibe that's hard to pin down on any one trait. Maybe you've spent so much time interacting with LLMs that you've started to pick up on their style subconsciously. The most LLM-esque trait of your posts, to me, is that most of them start with "You're absolutely right about everything and you're such a genius" followed by disagreeing with everything the other person said.
Also, I tend to ignore reassurances that LLM output is checked or that, for whatever science-y sounding reason, the output will be high-quality, because usually when I see these assurances the output ends up being the same as any other LLM output anyway.
"Don't comment bad code - rewrite it." - The Elements of Programming Style (1978), Brian W. Kernighan & P. J. Plauger, p. 144.
Offline
please don't contribute to the aur with llm slop, thank you..
Offline
I also think that the posts read like LLM output. It's a vibe that's hard to pin down on any one trait. Maybe you've spent so much time interacting with LLMs that you've started to pick up on their style subconsciously. The most LLM-esque trait of your posts, to me, is that most of them start with "You're absolutely right about everything and you're such a genius" followed by disagreeing with everything the other person said.
Also, I tend to ignore reassurances that LLM output is checked or that, for whatever science-y sounding reason, the output will be high-quality, because usually when I see these assurances the output ends up being the same as any other LLM output anyway.
It's not that I've picked up on how LLMs write; it's that LLMs were trained on writing styles like mine. Humans have been writing like this for way longer than AIs have, you know? If you take a look at my accounts on other platforms, you will clearly see that this sentence structure is one that I've developed over the course of my life and have been using since way before 2022.
The structure is simply:
1. Confirm that you've recognized the other person's argument.
2. Provide your response in a concise manner.
3. Elaborate with additional context to cement your points and give additional counterarguments.
This is a style that I was taught for essay writing in my high school subjects, and I found it effective on the internet, as (1) alleviates any potential tension, since I confirm to the person that I'm actively listening to them, (2) gives a direct response, and (3) reasons around this response to provide proof.
Since I don't typically speak on the internet unless I've got something useful to say, I haven't developed a "forum" writing style, and would just use my usual "essay" style. Perhaps it isn't the most appropriate way to communicate on here though, but I would prefer not to have to write in a more blunt and abrupt manner unless completely necessary, as I believe it would rob my responses of humility, which is something that's already quite scarce...
And now, to respond to your argument in a less LLM-y way: in my experience it's almost impossible to make an AI produce perfect output. Even while testing out these skills, depending on the model I was using, it would make more or fewer mistakes, so they're not a magic pill that makes models suddenly follow the guidelines to a tee. The value in them is mostly in bringing your attention to something you may have missed: when I fed my PKGBUILD into ChatGPT, I was told it was sound. Plonking the same one into an agent with these skills, I was told that my email was not obfuscated, the provides flag needed to have the "-git" part stripped, and the CMake destination had to be $srcdir. It hasn't replaced my work; it just checked it in a more reliable manner than a normal model would, basically acting as a linter. Other, smaller/older models have made mistakes though, which is why I'm asking for feedback.
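To make the provides example concrete, the fix for that one is a one-line suffix strip in bash (package name hypothetical):

    pkgname=myapp-git
    provides=("${pkgname%-git}")    # "myapp-git" provides "myapp"
    conflicts=("${pkgname%-git}")   # and conflicts with the non-VCS package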
Online
AI is still very distrusted, and it throws us, the Arch users, under the bus.
Reasons for this distrust can be found all over the Arch forums, by the way!
Most Archers use Arch to get better at Linux along the way; some are masters at it and some hobbyists like me ;) But we do have one thing in common, mostly: we like to work on Arch on our own terms (or with the community, of course), not letting the job be done by someone else, be it AI. There are always exceptions of course, but exceptions tend to prove a common direction: an Archer does the job by her-/himself, or together with the community!
But introducing an AI that can do the job for you doesn't seem helpful, counterintuitive even, in most ways. One of the main problems here is new Archers using AI and the mess that comes from that; you know who does the cleaning job: other Archers, community members, NOT some AI, let me be clear on that one! Arch is a do-it-yourself distro to learn Linux and get better at it. Also, already mentioned but I'll say it again: if an AI takes over the job for you in a distro like Arch, then where has that learning aspect gone? These seem like very valid points to not want this in our neighborhood. Arch Linux is and should be a personal and community-driven effort, is what I think about it; we are the AI here!
Don't think your somewhat shiny AI skills aren't seen as handsome work - they do show something, in a way - but maybe for distros like Red Hat.
Let me rephrase in a completely different way. If you create an AI that can create a shiny new LFS (Linux From Scratch) for me, then what's the point of LFS even existing in the first place? I think you get MY point! So, if your tool was a tool like shellcheck (not able to create anything in the first place), we would be having a very different conversation here, that's for sure. AI should definitely not become a way of life in our little 'Arch' corner of the universe, at least that's what I think about it anyway. I hope you're not offended by the way I look at your 'AI skills', but I also don't care, that may be obvious
Offline
AI is still very distrusted, and it throws us, the Arch users, under the bus.
Reasons for this distrust can be found all over the Arch forums, by the way!
Most Archers use Arch to get better at Linux along the way; some are masters at it and some hobbyists like me ;) But we do have one thing in common, mostly: we like to work on Arch on our own terms (or with the community, of course), not letting the job be done by someone else, be it AI. There are always exceptions of course, but exceptions tend to prove a common direction: an Archer does the job by her-/himself, or together with the community!
But introducing an AI that can do the job for you doesn't seem helpful, counterintuitive even, in most ways. One of the main problems here is new Archers using AI and the mess that comes from that; you know who does the cleaning job: other Archers, community members, NOT some AI, let me be clear on that one! Arch is a do-it-yourself distro to learn Linux and get better at it. Also, already mentioned but I'll say it again: if an AI takes over the job for you in a distro like Arch, then where has that learning aspect gone? These seem like very valid points to not want this in our neighborhood. Arch Linux is and should be a personal and community-driven effort, is what I think about it; we are the AI here!
Don't think your somewhat shiny AI skills aren't seen as handsome work - they do show something, in a way - but maybe for distros like Red Hat.
Let me rephrase in a completely different way. If you create an AI that can create a shiny new LFS (Linux From Scratch) for me, then what's the point of LFS even existing in the first place? I think you get MY point! So, if your tool was a tool like shellcheck (not able to create anything in the first place), we would be having a very different conversation here, that's for sure. AI should definitely not become a way of life in our little 'Arch' corner of the universe, at least that's what I think about it anyway. I hope you're not offended by the way I look at your 'AI skills', but I also don't care, that may be obvious
Thank you very much for the constructive criticism! No, I'm not offended at all - to be honest the responses that have stayed on track have been quite educational, and it's exactly what I was looking for.
The general consensus seems to be "don't let the AI make stuff from scratch". Although that wasn't really the intention with these skills, due to AI models' non-determinism there is no guarantee that they won't autonomously rewrite packages, even if told explicitly to simply act as a checker.
And since the community generally likes to steer away from AI, it seems like the target audience of such a tool may just be a little too niche to make a positive impact, while those that already make slop will feel more empowered. I will attempt to rewrite the skills to act purely as a linting tool, similar to the aforementioned shellcheck, but needless to say, it seems like the project is better shelved, at least for the moment.
I greatly appreciate everyone's time and input, and apologize to anyone who feels their time has been wasted!
Online
I found this thread interesting.
I've had to deal with troubles caused by bad AI advice a few times, but I have also encountered situations where the AI answer was very close to the correct answer.
Unfortunately the latter is very rare, and my impression is that's inherent to the LLM design.
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough?
Try clean chroot manager by graysky
Offline
I found this thread interesting.
I've had to deal with troubles caused by bad AI advice a few times, but I have also encountered situations where the AI answer was very close to the correct answer.
Unfortunately the latter is very rare, and my impression is that's inherent to the LLM design.
I'm glad to have brought value to the table! And I agree: at the end of the day LLMs are probabilistic, so there's no way to guarantee a correct output, as far as I understand. The only way to truly reduce their failure rate is by adjusting the weights via fine-tuning, increasing CoT, or daisy-chaining various LLMs to achieve a sort of "thinker-checker" workflow, but those simply bring them from "wrong most of the time" to "mostly right" for complex problems.
The only AI tool I have truly found useful and quite reliable for this sort of purpose is NotebookLM. Instead of coming with pre-trained data and making up responses, it tries to cite and paraphrase the sources you provided for every piece of information it gives. It's very useful as a quick look-up tool: just tell it what you're looking for and it indexes your sources, providing references to the exact citations, so you can find relevant information quite efficiently even if you do not trust its digest.
Online
As a question of principle:
The problem is that, in practice, it is very common for people to not RTFM, or at least not read it well enough to avoid making very common and simple mistakes.
Mistakes that aren't covered by namcap or shellcheck?
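For reference, those existing checkers are a single command each (the built package name is hypothetical):

    namcap PKGBUILD                         # lint the build script itself
    namcap myapp-1.0-1-x86_64.pkg.tar.zst   # lint the resulting package
    shellcheck -s bash PKGBUILD             # generic shell linting; PKGBUILDs are bash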
What makes you think people™ too lazy to read up on the PKGBUILD/AUR basics should™ be maintaining AUR packages itfp?
The AI might help them w/ the "how" but the "that" still has to be provided by them.
If it's just easily generated AI content I might easily forget about that whim tomorrow, no?
it seems like the target audience of such a tool may just be a little too niche to make a positive impact, while those that already make slop will feel more empowered
So the preference would be to have a tool that AI generates PKGBUILDs and the justification is that your AI slop is simply better™ than the other AI slop? Because trust me bro?
The AUR is moderated by humans; they might actually benefit from AI assisting them in filtering out cruft, but at the end of the day there's probably no interest in an AI deleting packages (i.e. autonomous AI moderation censorship)
With that in mind, the main concern must be to avoid flooding the AUR tasks - either w/ auto-generated content or even auto-generated requests (there've been incidents w/ mass deletion sprees on AUR packages and angry maintainers expressing their frustration with that)
Tools that point out flaws in PKGBUILDs are probably useful - whether simple pattern checks or LLMs.
Tools that end up "autonomously rewriting" drivel that vaguely looks like it could be a PKGBUILD, ending up w/ a formally correct but actually completely unvetted PKGBUILD of questionable value, run a huge risk of overloading the AUR maintenance.
--
Fwiw, slightly off topic and on style: people might have concluded that you're posting AI-generated content because it's rather elaborate but very shy on substance.
In my experience working with maintainers on different packages, it's more often than not that you find some sort of violation of the AUR guidelines exhibited in even the most carefully crafted packages, let alone those made by the average "indie" developer. I have committed heaps of errors, misunderstood a lot of statements, and missed crucial pieces of information while working on packages
Which one is it, "maintainers on different packages" or "I have"? "Many people say"?
A minimal PKGBUILD isn't very long or complicated, so what is an example of a "sort of violation of the AUR guidelines" and how can "the most carefully crafted packages" be full of errors?
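To illustrate: a complete, functional PKGBUILD fits on one screen (upstream name and checksum are placeholders):

    pkgname=myapp
    pkgver=1.0
    pkgrel=1
    pkgdesc="Example application"
    arch=('x86_64')
    url="https://example.com/myapp"
    license=('MIT')
    source=("$url/releases/myapp-$pkgver.tar.gz")
    sha256sums=('...')  # generate with makepkg -g

    build() {
      cd "myapp-$pkgver"
      make
    }

    package() {
      cd "myapp-$pkgver"
      make DESTDIR="$pkgdir" install
      install -Dm644 LICENSE "$pkgdir/usr/share/licenses/$pkgname/LICENSE"
    }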
Likewise: you've written some LLM skills that "will" do stuff because LLMs are "quite powerful, and are able to follow strict requirements quite well"
Have you actually tested those skills on some sample pools, both real-life and curated tests, and checked the results for false positives and negatives to assess the success-to-error rate, and can you share that data?
https://wiki.archlinux.org/title/AUR_su … submission
Make sure the package you want to upload is useful. Will anyone else want to use this package? Is it extremely specialized? If more than a few people would find this package useful, it is appropriate for submission.
Can your skills guarantee that? How does the LLM, based upon a completely new upstream, assess whether it's "useful"? For humans that's just an "I guess I believe" gut feeling - did the AI ever tell you "I don't think that's gonna be of any use"?
What does the AI think about compilers for https://en.wikipedia.org/wiki/Esoteric_ … g_language? Or https://aur.archlinux.org/packages?O=0&K=fortune?
'cause objectively none of that is even remotely useful.
Check the AUR if the package already exists. If it is currently maintained, changes can be submitted in a comment for the maintainer's attention. If it is unmaintained or the maintainer is unresponsive, the package can be adopted and updated as required. Do not create duplicate packages.
And how do they deal with this situation?
Does the LLM autonomously post comments to existing AUR packages?
Does it just try to orphan and adopt the package? Based on what conditions?
r/n we're at "I have written some markdown text I believe to maybe do useful things if fed to an LLM" - what do you reasonably expect as feedback?
"I bet it doesn't even work" ¯\_(ツ)_/¯
Offline
Wow, that is an incredibly detailed and well-thought-out response! I greatly appreciate the effort you have put into properly criticizing all my points! Let me respond briefly, point by point...
What makes you think people™ too lazy to read up on the PKGBUILD/AUR basics should™ be maintaining AUR packages itfp?
It's not about people who are lazy, but people who don't memorize the full wiki. The mistakes I'm referring to are: not declaring git submodules as separate sources, not using SPDX license names or not installing the license, installing the icon in the wrong directory, skipping checksums that shouldn't be skipped, not using git tags appropriately, naming packages incorrectly, etc. These are just the mistakes I've personally encountered over the last 8 months of contributing. If some tool were available to act as a final pass before submission, a lot of these mistakes could have been prevented.
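To make a few of these concrete, here is roughly what the correct patterns look like for a VCS package with a submodule (all names hypothetical; the pkgver() recipe is the standard one from the wiki's VCS package guidelines):

    pkgname=myapp-git
    license=('GPL-3.0-or-later')  # SPDX identifier, not the legacy 'GPL3'
    source=("git+https://example.com/myapp.git"
            "git+https://example.com/libfoo.git")  # submodule declared as its own source
    sha256sums=('SKIP' 'SKIP')    # 'SKIP' is acceptable for VCS sources only

    pkgver() {
      cd myapp
      git describe --long --tags --abbrev=7 | sed 's/\([^-]*-g\)/r\1/;s/-/./g'
    }

    prepare() {
      cd myapp
      git submodule init
      git config submodule.libfoo.url "$srcdir/libfoo"
      git -c protocol.file.allow=always submodule update
    }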
The AI might help them w/ the "how" but the "that" still has to be provided by them.
I agree with you completely.
So the preference would be to have a tool that AI generates PKGBUILDs and the justification is that your AI slop is simply better™ than the other AI slop? Because trust me bro?
It seems I didn't express my idea correctly: I don't think it's a good idea to generate a package with an AI agent from scratch, but "linting" an existing one with it doesn't sound outlandish if it can provide some extra insight that human error could have missed. Even if it's not 100% accurate, it would still make the person re-check the wiki for confirmation on that particular topic (or at least that's what I would use it for).
Tools that end up "autonomously rewriting" drivel that vaguely looks like it could be a PKGBUILD, ending up w/ a formally correct but actually completely unvetted PKGBUILD of questionable value, run a huge risk of overloading the AUR maintenance.
Yes, my question in that case would be: is it better to look through drivel, or through formally correct but unvetted PKGBUILDs? Because one of the two will still be pushed from time to time - that's inevitable. Though, pondering that question myself, I would likely say that drivel is better, because then at least it would be easier to identify improper packages and prevent them from doing damage.
Fwiw, slightly off topic and on style: people might have concluded that you're posting AI-generated content because it's rather elaborate but very shy on substance.
Hmmm, I suppose that might be the problem. People do tell me that I speak like a politician. I appreciate you pointing that out!
Which one is it, "maintainers on different packages" or "I have"?
Initially I was the one making mistakes, and then I noticed others making those same exact mistakes, so my contributions mostly involved fixing up other people's edits. Because of the ever-changing nature of Open Source software, this became tedious after a while.
A minimal PKGBUILD isn't very long or complicated, so what is an example of a "sort of violation of the AUR guidelines" and how can "the most carefully crafted packages" be full of errors?
The mistake examples I mentioned above are ones I've spotted in a lot of PKGBUILDs I personally use - some of which had a significant number of votes, if I recall correctly - because I always examine the PKGBUILD before installing.
Have you actually tested those skills on some sample pools, both real-life and curated tests, and checked the results for false positives and negatives to assess the success-to-error rate, and can you share that data?
Yes, I've tested them on all of the packages I've maintained or contributed to, and also on some published ones, making sure to include some that were clearly wrong and some that were perfectly sound. Overall, some models like nemotron-3-super or MiMo V2 Flash proposed incorrect changes quite often, while MiniMax M2.5 fixed every single error I had previously fixed myself in older versions of my packages, which was quite impressive. I have not done a statistical analysis of the results though, because the objective of the skills was, once again, to call attention to specific areas rather than fix every problem with a PKGBUILD.
Can your skills guarantee that? How does the LLM, based upon a completely new upstream, assess whether it's "useful"? For humans that's just an "I guess I believe" gut feeling - did the AI ever tell you "I don't think that's gonna be of any use"?
No - that's exactly what a human is needed for; these skills cannot replace one.
r/n we're at "I have written some markdown text I believe to maybe do useful things if fed to an LLM" - what do you reasonably expect as feedback?
That is a very valid criticism. I apologize if I came off as arrogant. The idea of an AI linter excited me personally quite a bit, especially after witnessing it in action firsthand, so I shared it too hastily without actually thinking through a proper post. Hopefully I have cleared up most of the misunderstandings this has caused.
Once again, thank you for dedicating the time to write up such a constructive response.
Online