Hi everyone!
Over the last year or so, I have been contributing packages to the AUR, and as a beginner I have found it difficult to follow every recommendation in the extensive wiki pages perfectly. I have therefore relied on external utilities such as NotebookLM to better understand the requirements and to verify my packages against them with more certainty. Although these have served me well as "linters", I would prefer not to copy and paste my files into a chat interface on every change, and I also want to share these helpers with the rest of the community, in the hope that they make other beginners' lives easier and improve package quality for us all.
So, recently, I have been playing with local AI agents through tools like OpenCode, and have noticed that when integrated with MCP servers and Skills, they become quite powerful and are able to follow strict requirements quite well. That gave me the idea of mocking up Skills specifically designed to aid with the creation of AUR packages. I have crafted them carefully, feeding them all the necessary documentation and verifying that everything stated in the SKILL.md files follows the guidelines to a tee.
I am therefore looking for feedback from the community, to see what you think about this idea, and whether you believe it's something that could truly come in handy for the creation and betterment of AUR packages all around, or whether it's something that is likely to do more harm than good. Here is a link to the GitHub repository with these skills. You can find the installation and usage instructions there.
Additionally, I would like to preemptively address concerns which I can envision being brought up:
1. This will result in more AI slop:
The main purpose of these skills is to reduce exactly that. New users are very likely to rely on AIs, which are notoriously inaccurate even if you try feeding them the right info, to verify their PKGBUILDs. We cannot prevent this, but with these skills they can give their AI agents a better chance of following the guidelines correctly and hallucinating less. And although this may result in more users actually attempting to push packages to the AUR, those who are well-versed enough in technology to run an AI agent with these skills are likely to take it more seriously and therefore be less problematic.
2. It will make it easier to mask malware:
Yes. It will be easier to create PKGBUILDs with malware that look innocent on the surface and follow the guidelines perfectly. But this is a textbook "appeal to consequences", and the same has been said of basically every technological advancement known to man.
3. What if the guidelines change?:
Since the repository is public, I encourage pull requests that modify existing skills with any changes people may see fit. Paradoxically, this would likely result in more people actually following the most recent changes to the guidelines, as instead of having to re-read all the wiki pages, they will simply have to update the skills, and their AI agents will do the reading for them.
4. This will result in fewer people reading the actual Wiki pages:
People who don't read the guidelines manually and just use the skills directly probably wouldn't have paid much attention to the guidelines anyway, or at least not enough to keep more of them in mind when working on their package than an AI agent would. So, hopefully this results in higher-quality packages being pushed to the AUR overall.
I appreciate both positive and negative feedback on this matter, and hope that it will be an interesting discussion for all. Thank you for reading.
I'm not going to bother mentioning the quality/correctness issues with LLMs because that's been discussed to death.
If you're not willing and able to read lots of text and understand it, Arch is not the distro for you. The correct solution for such people is to direct them to a distro more suited to their preferences and skill level. One of the main appeals of Arch is that you learn about computers and GNU/Linux whilst using it. Reading wiki articles is a huge part of this. If you (try to) automate this away, why are you using Arch?
I'm also sceptical of the inevitability your wording implies, like we have no choice but to give in to slop. If people refuse to RTFM that's their problem, it's not the Arch community's responsibility to cater to them.
"Don't comment bad code - rewrite it." - The Elements of Programming Style (1978), Brian W. Kernighan & P. J. Plauger, p. 144.
I'm not going to bother mentioning the quality/correctness issues with LLMs because that's been discussed to death.
If you're not willing and able to read lots of text and understand it, Arch is not the distro for you. The correct solution for such people is to direct them to a distro more suited to their preferences and skill level. One of the main appeals of Arch is that you learn about computers and GNU/Linux whilst using it. Reading wiki articles is a huge part of this. If you (try to) automate this away, why are you using Arch?
I'm also sceptical of the inevitability your wording implies, like we have no choice but to give in to slop. If people refuse to RTFM that's their problem, it's not the Arch community's responsibility to cater to them.
You definitely raise very valid concerns, and I agree with your point that nothing can replace actually reading a manual. The problem is that, in practice, it is very common for people not to RTFM, or at least not to read it well enough to avoid making very common and simple mistakes.
In my experience working with maintainers on different packages, more often than not you find some violation of the AUR guidelines even in the most carefully crafted packages, let alone those made by the average "indie" developer. I have committed heaps of errors, misunderstood a lot of statements, and missed crucial pieces of information while working on packages, simply due to the sheer number of recommendations that need to be considered simultaneously to create the "perfect" package. Often we as humans forget things or are simply too lazy to check - that's a flaw as uncorrectable as LLMs' quality/correctness issues.
On the flip side, with a toolset like the one I'm proposing, AI models can be sort of "fine-tuned" to follow guidelines more closely and carefully. That doesn't mean replacing the knowledge one must obtain to achieve results of a high standard, but rather serving as an intermediary validator that verifies whether all that knowledge has been applied correctly. These skills are not intended for creating a full package from scratch, but rather as a scaffolding or auditing tool aimed at more experienced programmers, who are serious and knowledgeable enough to go through the effort of actually setting up an agent with these skills. I believe that target demographic is less likely to ignore the wiki completely and rely solely on AI, and they would likely benefit from having a "linting" tool at their disposal, in the form of an AI agent that double-checks their work for them. But again, there's always the possibility of abusing the shiny toy once it's in your hands.
And on the topic of "giving in to slop": ever since ChatGPT's inception, I have been very skeptical of its use, and have tried to avoid LLMs whenever I could. But, being a 4th-year CS student, I cannot name a single classmate who does not abundantly rely on AI for a considerable chunk of their workload. It is simply the reality of where the world is going as AIs get better and better, and these upcoming generations of programmers will be contributing more and more to open-source projects and making packages. Of course, what you said is true - it is their problem if they make slop - but at the end of the day, those who will suffer are the end users. Therefore, if we can't force people to think on their own, my hope is to at least make their agents think a little better, and who knows whether someone will actually be able to extract tangible value from it, as I have in the past.
I apologize for the wall of text, I just have a lot of thoughts on this topic and think that it's a very important discussion to have.
let me turn your issue upside-down: you found an issue - ok; you asked ai about it - already not ok anymore; you came up with an ai to craft you another ai tool to bloat crap because you're too lazy to rtfm - super not ok, bro
and the last reply reads like an ai response - stopped reading after the first paragraph
as 2^8 noted: you are the problem here - not the wiki
if you lack the skill to properly contribute to the aur, you should rather not
otherwise, when you encounter an issue with what's written in the wiki, come here and ask comprehensibly: "i have an issue with X, here's what i tried and here's my result"
also: if you find issues with other packages, open an issue on the repo and let the maintainer know about it so they can fix it - just babbling "yea, i find it difficult and it seems others struggle, too" doesn't help anybody
if you want a challenge: LFS 13 just released - i'll give it a shot over the weekend
I hoped this was an attempt to use AI to help humans write better packages, but it looks like the focus is on automating the creation of packages.
For clarity:
Are the agents used as tools to check PKGBUILDs and give recommendations for issues?
Do these agents run locally or through 3rd-party services?
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough?
Try clean chroot manager by graysky
let me turn your issue upside-down: you found an issue - ok; you asked ai about it - already not ok anymore; you came up with an ai to craft you another ai tool to bloat crap because you're too lazy to rtfm - super not ok, bro
and the last reply reads like an ai response - stopped reading after the first paragraph
as 2^8 noted: you are the problem here - not the wiki
if you lack the skill to properly contribute to the aur, you should rather not
otherwise, when you encounter an issue with what's written in the wiki, come here and ask comprehensibly: "i have an issue with X, here's what i tried and here's my result"
also: if you find issues with other packages, open an issue on the repo and let the maintainer know about it so they can fix it - just babbling "yea, i find it difficult and it seems others struggle, too" doesn't help anybody
if you want a challenge: LFS 13 just released - i'll give it a shot over the weekend
I appreciate your feedback. It may indeed be that I simply have my priorities backwards, though I believe that you may have also slightly misinterpreted my statements. I do not mean to insinuate that the wiki is wrong in any way, I am actually trying to do the exact opposite, as I feel like it's an incredibly useful resource that people may be misusing by trying to feed it into their AI agents. These skills attempt to feed that information to them in a more structured and "appropriate" manner.
I am also not saying that I would be the target user of this tool - I am sharing it in the hope of reducing the number of errors I have to fix whenever I work with others, as so far I have encountered a lot of friction points that could have been avoided had people paid more attention to the wiki, or perhaps used a tool like the one I'm proposing; that is why I'm asking for feedback on it. You are right to suggest that I am part of the problem, as I have relied on tools like this in the past. But, at the same time, they have helped me create packages which I hope hold up to a high standard, and which have provided a lot of value for their users.
Lastly, it saddens me that you think my reply is AI-generated, as I would greatly value the opinion of someone clearly as knowledgeable as you on what I wrote there in my regular writing style. It seems that formality and soft-spokenness are attributed to laziness/ineptitude nowadays...
I hoped this was an attempt to use AI to help humans write better packages, but it looks like the focus is on automating the creation of packages.
For clarity:
Are the agents used as tools to check PKGBUILDs and give recommendations for issues?
Do these agents run locally or through 3rd-party services?
You are right to think that it is an attempt to help humans write better packages: these skills focus primarily on verifying the PKGBUILD structure and its adherence to the guidelines, rather than giving instructions on how to make a package from scratch. I have simply not restricted them to a single function, so as to test the extent of their capabilities, but I am open to rewriting them to support pure auditing only.
Now, to answer your questions: when the aur-guides skill is dispatched on an existing PKGBUILD, the agent is explicitly told to check for correct formatting, and the skill references the relevant websites for the agent to access automatically (if it has that ability), ensuring that as many of the guidelines are met as possible. These skills can be integrated into any AI agent, both local and external, through a variety of tools, including but not limited to Claude Code, OpenCode and Cursor.
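For readers unfamiliar with the Skills format: a skill is a SKILL.md file whose YAML frontmatter (name, description) tells the agent when to load it, followed by plain-language instructions. As a purely hypothetical sketch (the wording and steps below are mine, not necessarily what the repository actually ships), an auditing skill along these lines might look like:

```markdown
---
name: aur-guides
description: Audit an existing PKGBUILD against the Arch packaging guidelines before submission to the AUR.
---

When asked to review a PKGBUILD:

1. Check that the mandatory fields (pkgname, pkgver, pkgrel, arch, license) are present and well-formed.
2. Flag checksums set to SKIP and sources fetched over plain HTTP.
3. Cross-reference the wiki pages "PKGBUILD", "Arch package guidelines" and "AUR submission guidelines" when anything is ambiguous.
4. Report each violation together with the guideline it breaks; do not silently rewrite the file.
```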
both replies read like AI - reporting as potential spam - maybe some mod might spend the effort to check this further
both replies read like AI - reporting as potential spam - maybe some mod might spend the effort to check this further
Sure does. Noted.
Last edited by ewaller (Today 00:16:00)
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
The shortest way to ruin a country is to give power to demagogues.— Dionysius of Halicarnassus
---
How to Ask Questions the Smart Way
both replies read like AI - reporting as potential spam - maybe some mod might spend the effort to check this further
Sure does. Noted.
Guys, this is just how I speak. I am simply accustomed to expressing myself in an academic manner. I do not feel the need to dumb myself down, just because this sort of language is now being associated with AI writing. You can run my responses through whatever AI checking tool you want and it will tell you that it's not generated. Don't judge something by "how it reads", get quantifiable proof of it first.
And besides, you are deviating from the topic at hand. Let's discuss what this forum post was actually drafted for, instead of jumping to conclusions and throwing out false accusations on a completely unrelated matter.
If you want me to speak less formally, or write shorter responses, I can do so, just let me know. But I'm frankly disappointed by your attitude to something that I was quite interested in simply having a normal conversation about.
I also think that the posts read like LLM output. It's a vibe that's hard to pin down on any one trait. Maybe you've spent so much time interacting with LLMs that you've started to pick up on their style subconsciously. The most LLM-esque trait of your posts, to me, is that most of them start with "You're absolutely right about everything and you're such a genius" followed by disagreeing with everything the other person said.
Also, I tend to ignore reassurances that LLM output is checked or that, for whatever science-y sounding reason, the output will be high-quality, because usually when I see these assurances the output ends up being the same as any other LLM output anyway.
"Don't comment bad code - rewrite it." - The Elements of Programming Style (1978), Brian W. Kernighan & P. J. Plauger, p. 144.