I don't see any problem at all with bootstrapping a virtualenv in package() and then using that env from your app; it's perfectly reasonable. You just have to understand that this is not about packaging an upstream library: the use case here is deploying your own application with pacman to a (presumably Arch-based) server. In that case, why wouldn't I DTFIW? Say I have a Django web app with its own deps, and of course most of them are in neither Arch's repos nor the AUR; even if they were, it's much simpler to just package everything up and be done with the deployment. It's almost like using Docker, but even more lightweight: no container, nada.
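For what it's worth, a minimal sketch of such a package() function, assuming a hypothetical app called "myapp" with a requirements.txt; all names, paths, and the sed cleanup are illustrative assumptions, not a vetted recipe:

```shell
# PKGBUILD fragment -- "myapp", requirements.txt and the /opt layout are
# hypothetical examples, not a vetted recipe.
package() {
  cd "$srcdir/$pkgname-$pkgver"

  # Build the virtualenv directly under $pkgdir so pacman tracks every file.
  python -m venv "$pkgdir/opt/$pkgname/venv"

  # Install the pinned dependencies and the app itself into that env.
  "$pkgdir/opt/$pkgname/venv/bin/pip" install --no-cache-dir -r requirements.txt
  "$pkgdir/opt/$pkgname/venv/bin/pip" install --no-cache-dir .

  # venv/pip bake the build-time absolute path ($pkgdir/...) into shebangs,
  # activate scripts and pyvenv.cfg; strip it so the env resolves to
  # /opt/$pkgname/venv after pacman -U on the target machine.
  sed -i "s|$pkgdir||g" "$pkgdir/opt/$pkgname/venv/pyvenv.cfg"
  find "$pkgdir/opt/$pkgname/venv/bin" -maxdepth 1 -type f \
    -exec sed -i "s|$pkgdir||g" {} +
}
```

Note the find uses -type f so the python symlinks in bin/ are left alone; running sed -i on them would replace the symlink with a regular file.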
It's utterly forbidden to pip install in the post_install hook of $pkgname.install, as the result doesn't even constitute a package: pacman must be able to track the files, and you cannot YOLO-download and compile source code during pacman -Syu or pacman -U ./path/to/foo.pkg.tar.zst.
It's strongly discouraged to create virtualenvs even in the package() function (installing to "$pkgdir"), and there's really no reason the software should not work with current versions of the dependent global packages -- if it doesn't work, the package should be patched to work and those patches sent upstream, or backported if upstream already fixed it.
See e.g.
https://wiki.gentoo.org/wiki/Why_not_bu … pendencies
https://fedoraproject.org/wiki/Bundled_Libraries
https://www.debian.org/doc/debian-polic … eddedfiles
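If you go the patching route instead, the usual PKGBUILD shape is a prepare() step; the patch filename here is a hypothetical placeholder:

```shell
# PKGBUILD fragment -- the patch name is a hypothetical placeholder.
prepare() {
  cd "$srcdir/$pkgname-$pkgver"
  # Apply the compatibility fix (backported or pending upstream) so the
  # software runs against the current globally packaged dependencies.
  patch -Np1 -i "$srcdir/fix-current-deps.patch"
}
```

The patch itself would be listed in the source=() array with a checksum, so makepkg verifies it like any other input.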
What I'm looking for: 1) does this sound reasonable? and 2) are there any best practices around using a Python virtualenv when building an Arch package?
Thanks in advance for any advice.