Hi. Cross-posting from this AUR comment:
https://aur.archlinux.org/pkgbase/pytho … nt-1055692
I've been trying to compile python-pytorch-cuda12.9 for a few days now and would really appreciate some help. I was able to makepkg and install cuda-12.9, cudnn9.10-cuda12.9 and nccl-cuda12.9. But when compiling python-pytorch-cuda12.9, the first couple of attempts crashed so badly that they froze my whole graphical session. I was able to solve that by setting MAX_JOBS=10 instead of 20 in the PKGBUILD file.
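For reference, this is roughly where I lowered the job count; it's only a sketch, since the actual PKGBUILD may export MAX_JOBS in a different place or function:

```bash
# Hypothetical sketch of my edit, not the upstream PKGBUILD verbatim.
build() {
  export MAX_JOBS=10   # was 20; the higher value exhausted memory and froze my session
  # ... rest of the upstream build() left unchanged ...
}
```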
But now, after several hours of building, I get the error below. I'm not sure how to debug or fix it.
Full logs are here:
https://www.swisstransfer.com/d/b7b1b01 … 462e3b669d
```
[3717/7529] /usr/bin/ccache /opt/cuda/bin/nvcc
-forward-unknown-to-host-compiler
-ccbin=/usr/bin/g++-14
-DAT_PER_OPERATOR_HEADERS
-DFLASHATTENTION_DISABLE_ALIBI
-DFLASHATTENTION_DISABLE_SOFTCAP
-DFLASH_NAMESPACE=pytorch_flash
-DFMT_HEADER_ONLY=1
-DGFLAGS_IS_A_DLL=0
-DGLOG_USE_GFLAGS
-DGLOG_USE_GLOG_EXPORT
-DHAVE_MALLOC_USABLE_SIZE=1
-DHAVE_MMAP=1
-DHAVE_SHM_OPEN=1
-DHAVE_SHM_UNLINK=1
-DIDEEP_USE_MKL
-DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS
-DONNXIFI_ENABLE_EXT=1
-DONNX_ML=1
-DONNX_NAMESPACE=onnx_torch
-DPROTOBUF_USE_DLLS
-DTORCH_CUDA_BUILD_MAIN_LIB
-DTORCH_CUDA_USE_NVTX3
-DUNFUSE_FMA
-DUSE_C10D_GLOO
-DUSE_C10D_MPI
-DUSE_C10D_NCCL
-DUSE_CUDA
-DUSE_CUFILE
-DUSE_DISTRIBUTED
-DUSE_EXTERNAL_MZCRC
-DUSE_FLASH_ATTENTION
-DUSE_MEM_EFF_ATTENTION
-DUSE_NCCL
-DUSE_RPC
-DUSE_TENSORPIPE
-D_FILE_OFFSET_BITS=64
-Dtorch_cuda_EXPORTS
-I/home/starch/builds/aur/python-pytorch-cuda12.9/src/pytorch-cuda/build/aten/src
...
-I/home/starch/builds/aur/python-pytorch-cuda12.9/src/pytorch-cuda/torch/csrc/api/include
-isystem /home/starch/builds/aur/python-pytorch-cuda12.9/src/pytorch-cuda/build/third_party/gloo
...
-isystem /home/starch/builds/aur/python-pytorch-cuda12.9/src/pytorch-cuda/cmake/../third_party/cudnn_frontend/include
-DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS
-Xfatbin
-compress-all
-DONNX_NAMESPACE=onnx_torch
-gencode arch=compute_52,code=sm_52
...
-gencode arch=compute_121,code=sm_121
-Xcudafe
--diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl
--expt-relaxed-constexpr
--expt-extended-lambda
-Xfatbin
-compress-all
-Wno-deprecated-gpu-targets
--expt-extended-lambda
-DCUB_WRAPPED_NAMESPACE=at_cuda_detail
-DCUDA_HAS_FP16=1
-D__CUDA_NO_HALF_OPERATORS__
-D__CUDA_NO_HALF_CONVERSIONS__
-D__CUDA_NO_HALF2_OPERATORS__
-D__CUDA_NO_BFLOAT16_CONVERSIONS__
-DC10_NODEPRECATED
-O3
-DNDEBUG
-std=c++17
-Xcompiler=-fPIC
-march=x86-64
-DMKL_HAS_SBGEMM
-DMKL_HAS_SHGEMM
-DTORCH_USE_LIBUV
-DCAFFE2_USE_GLOO
-Xcompiler
-Wall
-Wextra
-Wdeprecated
-Wunused
-Wno-unused-parameter
-Wno-missing-field-initializers
-Wno-array-bounds
-Wno-unknown-pragmas
-Wno-strict-overflow
-Wno-strict-aliasing
-Wredundant-move
-Wno-maybe-uninitialized
-MD
-MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/TensorModeKernel.cu.o
-MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/TensorModeKernel.cu.o.d
-x cu
-c /home/starch/builds/aur/python-pytorch-cuda12.9/src/pytorch-cuda/aten/src/ATen/native/cuda/TensorModeKernel.cu
-o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/TensorModeKernel.cu.o
ninja: build stopped: subcommand failed.
ERROR Backend subprocess exited when trying to invoke build_wheel
==> ERROR: A failure occurred in build().
Aborting...
```