I need to be able to build against a specific version of glibc.
I initially looked at Docker, but that was extremely limited, and when I asked for help with it I was told Docker is probably not the best tool and that I should look at crosstools-ng.
I then looked at crosstools-ng and had zero luck getting everything to compile with it: despite being told I was doing it correctly, it was not locating libraries it should have. They suggested things like buildroot.
I then looked at buildroot, and got it set up only to be told it hasn't supported including build tools for over a decade.
I'm at a complete and total loss of where to go from here.
My goal is to be able to set up a dev environment against any version of glibc I want. I'd prefer to not need to chroot at all, but I'm not opposed to chrooting into the environment I create.
I'm not sure I'm clear on the actual goals. Does the source you are building from depend *only* on (g)libc? If it has other dependencies, are you going to need dated versions of those as well? Are the other libc's you want to build against just older versions of the x86_64 glibc, or are you aiming for other architectures?
I'm a bit confused by the use of "dev environment" as the libc version should not be particularly relevant there (aside from feature-test macros you might include in the code). You only need to specify the actual libc to use when building the code for distribution ... which I'd not really call a "dev environment" but rather a build server.
But generally speaking you'd just need to include the relevant library/ies for the linker line and specify -nodefaultlibs.
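To make that concrete: a minimal sketch of such a link line, assuming (hypothetically) an alternate glibc installed under /opt/glibc-2.37. The paths and the main.c file name are made up for illustration; in practice a full cross toolchain with --sysroot is usually less fragile.

```shell
# Hypothetical prefix holding the alternate glibc.
SYSROOT=/opt/glibc-2.37

# Guarded so the sketch is harmless when there is nothing to build;
# the flags, not the program, are the point.
if [ -f main.c ]; then
    gcc -nodefaultlibs \
        -I"$SYSROOT/include" \
        -L"$SYSROOT/lib" \
        -Wl,--dynamic-linker="$SYSROOT/lib/ld-linux-x86-64.so.2" \
        -Wl,-rpath,"$SYSROOT/lib" \
        main.c -o main -lc -lgcc
fi
```

Note that -nodefaultlibs (unlike -nostdlib) still links the host's startup files (crt*.o), which is one of the ways host contamination can sneak in anyway.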
Last edited by Trilby (2024-02-01 00:45:39)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Trilby wrote:I'm not sure I'm clear on the actual goals. Does the source you are building from depend *only* on (g)libc? If it has other dependencies, are you going to need dated versions of those as well? Are the other libc's you want to build against just older versions of the x86_64 glibc, or are you aiming for other architectures?
I'm a bit confused by the use of "dev environment" as the libc version should not be particularly relevant there (aside from feature-test macros you might include in the code). You only need to specify the actual libc to use when building the code for distribution ... which I'd not really call a "dev environment" but rather a build server.
But generally speaking you'd just need to include the relevant library/ies for the linker line and specify -nodefaultlibs.
The goal is to get a build environment set up for building various things. The main requirement is the glibc version at the moment.
They have other dependencies, but I am willing to build those as needed in most cases.
A build server is probably the better phrasing, assuming 'server' is used loosely.
Everything is x86_64 thankfully so I don't need to worry about anything on that front.
For reference, some of the pre-reqs I need to build within the environment are
curl
openssl
libjpeg-turbo
elfutils
gpgme
pkg-config
libgcrypt
zlib
libffi
libpng
pcre2
libgpg-error
glib
libassuan
I was trying to do this with crosstools-ng and a staging directory, but I kept running into linker errors when trying to build gpgme: it couldn't find libassuan when linking, even though it should have. The most likely scenario was contamination from the host, so I moved to trying to get a dedicated environment, be it a chroot or something else.
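For reference, the staging-directory pattern described here usually only holds together when every dependency is configured against the same prefix, so that later packages (gpgme) find earlier ones (libassuan) through pkg-config. A sketch, with /opt/stage as a hypothetical staging path:

```shell
# Hypothetical staging prefix shared by every dependency.
PREFIX=/opt/stage

# Make configure scripts, the preprocessor, and the linker all look
# in the staging tree before anything on the host.
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig"
export CPPFLAGS="-I$PREFIX/include"
export LDFLAGS="-L$PREFIX/lib -Wl,-rpath,$PREFIX/lib"

# Run this in each dependency's source tree, in dependency order
# (e.g. libgpg-error, then libassuan, then gpgme); guarded so the
# sketch is a no-op outside a source tree.
if [ -x ./configure ]; then
    ./configure --prefix="$PREFIX" && make && make install
fi
```

This doesn't isolate you from the host toolchain, which is why a "couldn't find libassuan" error often really means one package was configured without these variables set.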
You can put libraries and includes wherever you want, and then you'd just need to include the relevant library/ies for the linker line and specify -nodefaultlibs.
Trilby wrote:You can put libraries and includes wherever you want and then you'd just need to include the relevant library/ies for the linker line and specify -nodefaultlibs.
I'm building from various release tarballs and/or git sources.
I'm not sure how I'd be able to figure out what to specify in that case.
This isn't for things I'm creating but rather building for a different machine.
Ketrel wrote:I'm building from various release tarballs and/or git sources.
I'm not sure how I'd be able to figure out what to specify in that case.
This really doesn't follow at all. What you "specify" is based on the machine(s) you're building for, not the upstream source.
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Ketrel wrote:I'm building from various release tarballs and/or git sources. I'm not sure how I'd be able to figure out what to specify in that case.
This really doesn't follow at all. What you "specify" is based on the machine(s) you're building for, not the upstream source.
I think you misunderstood my statement there.
I don't know how to modify the upstream source to do what was stated.
"you'd just need to include the relevant library/ies for the linker line and specify -nodefaultlibs."
I do not know how to do that by a longshot.
This is an obvious https://en.wikipedia.org/wiki/XY_problem
Please explain, in layman's terms, what you're trying to achieve as the end goal.
I.e. e.g. "I want to build packages for Fedora, SuSE and various Debian versions on my archlinux host" - not any specific steps you believe to be necessary for that goal.
Ketrel wrote:I do not know how to do that by a longshot.
You don't modify the upstream source (in the worst case you might need a simple patch to a Makefile, but even this is unlikely as everything you'd configure should be through environment variables). That's the point. But aside from that, the fact that these points are apparently completely unfamiliar to you begs the question of how you expect to complete the task you seem to be taking on ... and further points to this being an X-Y question.
Last edited by Trilby (2024-02-02 14:24:46)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Ketrel wrote:I do not know how to do that by a longshot.
You don't modify the upstream source (in the worst case you might need a simple patch to a Makefile, but even this is unlikely as everything you'd configure should be through environment variables). That's the point. But aside from that, the fact that these points are apparently completely unfamiliar to you begs the question of how you expect to complete the task you seem to be taking on ... and further points to this being an X-Y question.
I think you're still misunderstanding.
Here, let me give you my ultimate goal:
Immediate: I want to compile the following 3 things under glibc 2.37 or lower
1. appimagetool
2. linuxdeploy
3. retroarch
Then I want to package retroarch as an appimage using #1 and #2
Long term: I want to have a build environment where I can continue to compile additional things (either to later package as appimages or not) under glibc 2.37 or lower
So what I'm saying is I have NO idea how to write or modify the incredibly large number of makefiles these 3 things (AND their pre-reqs) require, in order to figure out what libraries I need to explicitly pass to LDFLAGS. It sounds like an incredibly daunting undertaking which seemingly defeats the entire purpose of a build environment to begin with.
This isn't an X-Y question, I'm asking for what I want.
(And honestly I'm not too thrilled about people telling me it's such when I don't elaborate on every detail as to WHY I want it. I have to spend more time defending that I'm asking the right question than actually asking the question.)
Last edited by Ketrel (2024-02-02 20:37:59)
Nobody gives two fucks about *why* you want to do anything and it *is* an XY-problem and I suggest to read the wikipedia article on what that actually means, otherwise I'll give you the gist in very blunt terms.
On topic: What you're trying to do will realistically not work.
=> Create a VM and install some distro/version that's using the desired version of glibc and create the appimage there.
You could also install an older OS on a parallel partition and chroot there, but that has no advantages.
You may or may not have noticed that retroarch links a shit-ton of other libraries that have all currently been compiled and linked against newer versions of glibc, with (assuming you're planning to fish glibc from the ALA and not build it yourself) newer versions of binutils and gcc - the chances that you can inject an older version of glibc into that are virtually zero. It *might* work (provided you build that version of glibc on the current stack), but most likely won't. So in reality "compile [retroarch] under glibc 2.37 or lower" means to compile it and all its dependencies under glibc 2.37.
If you want to know: ldd all ELF objects in retroarch from the repo (/usr/bin and *.so), then "readelf -sW" each of those libraries and grep for "GLIBC_2.38" - if that shows up once, it's game over.
(You can also just grep the binaries but that might cause false positives)
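The check described above can be scripted. A sketch; the helper names are made up, and the 2.37 cutoff and the retroarch path come from this thread:

```shell
# Keep only the highest GLIBC_2.x symbol version found on stdin
# (stdin is expected to be `readelf -sW` output).
max_glibc() {
    grep -o 'GLIBC_2\.[0-9]*' | sort -t. -k2,2 -n | tail -n1
}

# Flag every shared library a binary actually links whose required
# glibc minor version is newer than 37.
check_binary() {
    ldd "$1" | awk '$3 ~ /^\// {print $3}' | while read -r lib; do
        v=$(readelf -sW "$lib" | max_glibc)
        minor=${v#GLIBC_2.}
        if [ -n "$minor" ] && [ "$minor" -gt 37 ]; then
            echo "too new ($v): $lib"
        fi
    done
}

# Example (on a machine with retroarch installed):
# check_binary /usr/bin/retroarch
```

Any output at all means at least one dependency already requires glibc 2.38 or newer.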
Why do you think you want to or need to build appimagetool and linuxdeploy against the same glibc as the target (retroarch)?
Either way, the binary version in the AUR is older than glibc 2.37 and you could just use that - and actually it doesn't matter anyway.
Have you ever come across https://docs.appimage.org/packaging-guide/index.html ?
I think I'm really explaining something poorly here.
Goal 1: Build environment based on glibc 2.37 or lower
Goal 2: appimagetool + linuxdeploy built under said environment
Goal 3: Retroarch built under said environment and packaged as an appimage
Reason for these goals is I'm making an appimage targeting a device that uses glibc 2.37.
What I've done that sort of worked is a Docker image of a Debian version that uses an older version of glibc.
This worked and got me the build environment.
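A sketch of that Docker route, assuming debian:bookworm (which ships glibc 2.36, i.e. below the 2.37 target) is a suitable base; the RUN_DOCKER guard just keeps the snippet inert unless you opt in:

```shell
# debian:bookworm ships glibc 2.36, below the glibc 2.37 target.
IMAGE=debian:bookworm

# Guarded: set RUN_DOCKER=1 to actually start the container.
if [ "${RUN_DOCKER:-0}" = 1 ]; then
    # Mount the current source tree at /work and drop into a shell
    # with the dated toolchain installed.
    docker run --rm -it -v "$PWD:/work" -w /work "$IMAGE" bash -c \
        'apt-get update && apt-get install -y build-essential pkg-config && bash'
fi
```

Everything built inside the container then links against the container's glibc, which is the whole point of the approach.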
seth wrote:So in reality "compile [retroarch] under glibc 2.37 or lower" means to compile it and all its dependencies under glibc 2.37
YES, I know this. This is what I want to do.
seth wrote:Why do you think you want to or need to build appimagetool and linuxdeploy against the same glibc as the target (retroarch)?
Because I need to build the appimage under that glibc or older otherwise it won't run on the target because the target uses 2.37
So I'm back to my same question I originally asked. It's not an x-y problem because I'm literally asking HOW to do "Goal 1". The rest are two things I intend to do WITH what I produce from goal 1. But "goal 1" is my goal.
If you still think I'm trying to ask something else, please tell me what it is, because I'm not understanding then. I believe what I'm asking for is how to set up a build environment. I'm not asking how to do it with any specific tool, just how to get the environment.
Ketrel wrote:YES, I know this. This is what I want to do.
https://wiki.archlinux.org/title/Arch_build_system
Ketrel wrote:Because I need to build the appimage under that glibc or older otherwise it won't run on the target because the target uses 2.37
That's nonsense (the specific binary composition of the appimage creating tool/s is not relevant to running the appimage; the tool needs to run in the build environment, not the target environment) and again: the binary in the AUR is perfectly old enough.
Have you ever come across https://docs.appimage.org/packaging-guide/index.html ?
An xy-problem frequently arises when people have an idea, then come up with a stupid plan that won't work to implement their idea, and then google and ask together the bits and pieces they figure will help them with their plan - which will ultimately still run them against a wall, because their plan was stupid and never gonna work.
I don't know what your idea is, but for now I'm not convinced that rebuilding the entire distro against a dated version of glibc isn't a stupid plan.
As a matter of fact it is so fringe that nobody here remotely guessed that, it sounded more like you wanted to build one specific binary against a specific copy of glibc.
YES, I know this. This is what I want to do.
I also don't see how the Arch Build System applies here because it would still use Glibc 2.38, no?
Because I need to build the appimage under that glibc or older otherwise it won't run on the target because the target uses 2.37
seth wrote:That's nonsense (the specific binary composition of the appimage creating tool/s is not relevant to running the appimage; the tool needs to run in the build environment, not the target environment) and again: the binary in the AUR is perfectly old enough.
seth wrote:Have you ever come across https://docs.appimage.org/packaging-guide/index.html ?
Yes, I've read the appimage docs
I've also specifically read this section that explicitly talks about the error I got when I just built everything on my system.
https://docs.appimage.org/reference/bes … ase-system
I've also been looking at the linked https://github.com/AppImage/AppImageKit … pGenerator for the same reason.
I really think this is an issue with my phrasing. I'm not sure what I'm saying wrong though.
Last edited by Ketrel (2024-02-03 01:15:28)
Refers precisely to what seth referred you to, and to what you should be doing: use a VM, install an old debian stable or similarly dated base system, and build everything in that system.
Most of the other cases will likely involve you getting your hands dirtier to adjust the things you're building to a different cross toolchain.
Last edited by V1del (2024-02-03 01:29:23)
V1del wrote:Refers precisely to what seth referred you to, and to what you should be doing: use a VM, install an old debian stable or similarly dated base system, and build everything in that system.
I'm really confused now, the only time he mentioned a VM, he said
"On topic: What you're trying to do will realistically not work."
But that's what I DID do with Docker and Debian.
Then what's your remaining question? If you did that and it did work, why are we talking about versions of glibc that you'd want to build otherwise/adjusting linker paths, when this wouldn't inherently be necessary assuming an old enough base system?
If that didn't work you need to post the actual errors you got and potentially seek support of the older base system you're using.
Last edited by V1del (2024-02-03 01:33:52)
V1del wrote:Then what's your remaining question? If you did that and it did work, why are we talking about versions of glibc that you'd want to build otherwise/adjusting linker paths, when this wouldn't inherently be necessary assuming an old enough base system?
If that didn't work you need to post the actual errors you got and potentially seek support of the older base system you're using.
The docker method DID work.
However, multiple people told me there's better ways to do this than docker, with toolchains/chroots/etc.
I've not been able to figure out any working method using them, so I'm asking what ARE the actual ways to do it and how to do it using them.
The way I expected this to go was I asked, someone would recommend one tool/toolset or another, I'd look at said tool(s), and ask any further questions that arose when I tried.
Instead, I'm having to defend my question itself, as well as its whole premise (glibc versions), which is explicitly mentioned in the appimage documentation alongside the very error I got when I didn't address it, so I'm a bit snippy, and I'm sorry for that.
Last edited by Ketrel (2024-02-03 01:44:02)
Ketrel wrote:I'm really confused now, the only time he mentioned a VM, he said
"On topic: What you're trying to do will realistically not work."
And then I told you to create a VM w/ the dated system and build your stuff there - the "=>" signifies an implication.
"What you want to do won't work, thus do this instead."
I will point out again that you most likely want to solve some problem, then came up with an idiotic plan for that and are now trying to inquire how to pursue your idiotic plan.
If you want a more informed comment, explain the problem you want to solve. Otherwise this is going absolutely nowhere.
FWIW, more generally speaking: if docker and debian worked and worked properly, what exactly was "unsatisfactory", who are the people that "told me this wasn't a good solution", and what were their alternatives? Apparently crosstools-ng, but that didn't work, and we don't know what didn't work there - so if you wanted to fix crosstools you'd need to post some info on that.
From personal anecdote, the last time I had a bigger cross-compiling project was for my bachelor thesis where we literally rebuilt almost the entirety of debian ARM to have a vetted platform for a securely encrypted raspberry pi featuring a plausible deniability boot mode. The build process for that used an arm debian docker with the relevant qemu cross runtimes. That worked fine, I don't know whether it was the "best" approach but it did what we needed.
And to maybe get more practical on the Y of the XY: piecing together from what you posted so far, part of your goal is to generate an appimage of retroarch based on some old distribution baseline -- correct? (if you have other plans for the build environments, we can look at those separately, we need to know the immediate end goal you have right now) and what's the end goal for that appimage? Do you actually want to build some arcade cabinet or a game console or a fancy living room system that for some reason relies on an old debian baseline?
FWIW more generally speaking, if docker and debian worked and worked properly, what exactly was "unsatisfactory" and who are "people told me this wasn't a good solution" and what were their alternatives? apparently crosstools-ng but that didn't work, but we don't know what didn't work there so if you wanted to fix crosstools you'd post some info on that.
I wasn't seeking help on my issues with crosstools unless the answer to my original question was that crosstools WAS the best method. Otherwise, I was seeking the best method.
And to maybe get more practical on the Y of the XY: Piecing together from what you posted so far, part of your goal is to generate an appimage of retroarch based on some old distribution baseline -- correct? (if you have other plans for the build environments, we can look at those seperately, we need to know the immediate end goal you have right now) and what's the end goal for that appimage? Do you actually want to build some arcade cabinet or a game console or a fancy living room system that for some reason relies on an old debian baseline?
I keep saying it's not an XY thing because I truly, honestly, and seriously am asking for the best way to get the build environment. The appimage is the first thing I'm building, because I did that in the docker container and it worked. The build environment IS my goal. The appimage is a test of said environment. I have a functional one already from Docker.
You've been asking how to build against a specific version of glibc… but the best way to build an appimage on a dated software stack is to use a dated software stack.
I'd use a VM but you can also just chroot into the system.
You can also start w/ a dated SW stack and then update isolated packages to the last release on that stack (using the ABS)
What you can NOT do (in general) is inject some random glibc and deploy that, and retroarch built against it, alongside libraries that were compiled against a newer version of glibc - glibc is (usually) backwards but never forward compatible.
Random containers might provide you with random versions of random libraries, but you'll have to reach into the host for stuff that's not part of that container (and not required by it, but by your build target) and then things will go south.
You want to build your package against a coherent software stack, ie. an installation of an old system in a chroot or VM.
You can probably even skip the "build" part and create the appimage from the existing package (unless you're seeking to build a newer version of retroarch on/against the older software stack)
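A debootstrap-based sketch of that chroot route; the target directory is hypothetical, and the block is guarded because creating the chroot needs root and network access:

```shell
# Hypothetical directory for the dated system.
TARGET=/srv/bookworm-build

# Guarded: set RUN_BOOTSTRAP=1 (and run as root) to actually do it.
if [ "${RUN_BOOTSTRAP:-0}" = 1 ]; then
    # Install a coherent old Debian userland (bookworm: glibc 2.36).
    debootstrap bookworm "$TARGET" http://deb.debian.org/debian
    # Add a toolchain inside it, then drop into the chroot to build.
    chroot "$TARGET" /bin/bash -c \
        'apt-get update && apt-get install -y build-essential pkg-config'
    chroot "$TARGET" /bin/bash
fi
```

This gives the same coherent old stack as the VM or Docker routes, just without the virtualization layer.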
The chroot option is why I was looking at buildroot.
(I also saw that you could use that to generate an image you could run with Qemu to further isolate things which is possibly the route I'm considering, but I need to build a toolchain that I can transfer into it first)
But these answers are the things I'm looking for here. The appimage is not the goal. I made that with the docker setup and am already using it. My honest to goodness goal is to get a build environment for any future things I want to build (and possibly to also package as appimages) for said target.
Right now what it looks like I should be trying to do (from my understanding of this process) to get exactly the environment I want (vs using an older supported Debian image like I did within Docker) is
1. Build a toolchain with crosstools (and then use that to build a second one intended for use inside the build environment - a Canadian Cross, I believe they call this)
2. Build a buildroot image with buildroot, and include the 2nd toolchain inside
3. Turn that into a VM I run with Qemu, and try from there (this method would require me to build a LOT of pre-req libraries, but should give me exactly the env I need)
Wouldn't it be easier to just install a compiler on the target system and build there?
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Wouldn't it be easier to just install a compiler on the target system and build there?
It would be if the target system wasn't set up to be immutable other than the home directory. It would be orders of magnitude easier.
I'm going to be trying the crosstools-ng -> buildroot -> qemu route.
I'll probably end up with some sort of question from there, but I'm re-reading everything on doing this process.
Last edited by Ketrel (2024-02-04 01:34:19)