
#1 2024-06-28 13:30:03

0x1
Member
Registered: 2022-11-27
Posts: 3

trouble mounting a second internal NVMe (encrypted) from the XFCE desktop

I have a new system (laptop) with two internal NVMe drives;

nvme1n1 is the drive in question.
I'd do something like this to establish the storage drive:

sgdisk -Z /dev/nvme1n1
sgdisk -a 2048 -o /dev/nvme1n1
sgdisk -n 1:0:+0 -t 1:8300 -c 1:"STORAGE" /dev/nvme1n1
cryptsetup --batch-mode --type luks1 --cipher aes-xts-plain64 --key-size 512 --hash sha512 --align-payload 2048 -i 5000 --use-random luksFormat /dev/nvme1n1p1
cryptsetup luksOpen --allow-discards /dev/nvme1n1p1 lvm
lvm pvcreate --dataalignment 4M /dev/mapper/lvm
lvm vgcreate vg /dev/mapper/lvm
lvm lvcreate -l 100%FREE -n storage vg
vgscan && vgchange -ay
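
To finish the setup, the logical volume still needs a filesystem before anything can mount it; roughly something like this (ext4 in my case; the label and the temporary mount point are just examples):

mkfs.ext4 -L STORAGE /dev/vg/storage   # filesystem label is just an example
mkdir -p /mnt/storage                  # temporary mount point for a quick console test
mount /dev/vg/storage /mnt/storage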

I'm interested in mounting the encrypted drive through the typical process, the same as with an encrypted USB drive (i.e. double-clicking the icon on the desktop).

The encrypted drive is visible on the desktop. After double-clicking it, I get the prompt for the LUKS password, then the sudo password prompt (I presume this is polkit doing its job).
After luksOpen finishes, I'm hit with a dialog: "mount failed" / "operation was canceled".

I have a fresh install (base + XFCE + gvfs). My LUKS-LVM pen drives work as expected, no issues.
I'm stuck on how to resolve the mounting issue; I'd like to avoid opening a console every time I need to open the drive.

As an aside, everything works if done via the console :-/
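
For reference, the console version is roughly this (the mount point is just an example):

cryptsetup luksOpen /dev/nvme1n1p1 lvm   # prompts for the LUKS passphrase
vgchange -ay vg                          # activate the volume group
mount /dev/vg/storage /mnt/storage       # example mount point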

Offline

#2 2024-06-28 16:16:57

0x1
Member
Registered: 2022-11-27
Posts: 3

Re: trouble mounting a second internal NVMe (encrypted) from the XFCE desktop

Looks like there is an underlying issue: the gvfs udisks2 volume monitor segfaults when the volume group gets activated:

Jun 28 16:05:06 arch lvm[1319]: PV /dev/dm-0 online, VG lvm is complete.
Jun 28 16:05:06 arch systemd[1]: Started /usr/bin/lvm vgchange -aay --autoactivation event lvm.
░░ Subject: A start job for unit lvm-activate-lvm.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ A start job for unit lvm-activate-lvm.service has finished successfully.
░░
░░ The job identifier is 1289.
Jun 28 16:05:06 arch lvm[1322]:   1 logical volume(s) in volume group "lvm" now active
Jun 28 16:05:06 arch kernel: gvfs-udisks2-vo[1181]: segfault at 20 ip 000070e465e0907e sp 00007ffd5daf3b50 error 4 in libgio-2.0.so.0.8000.3[70e465d92000+11a000] likely on CPU 12 (core 24, socket 0)
Jun 28 16:05:06 arch kernel: Code: 8b 5d f8 31 c0 c9 c3 0f 1f 44 00 00 f3 0f 1e fa 55 48 89 e5 53 48 89 fb 48 83 ec 08 67 e8 1a fb ff ff 48 85 db 74 2d 48 89 c6 <48> 8b 03 48 85 c0 74 05 48 39 30 74 0d 48 89 df ff 15 84 03 12 00
Jun 28 16:05:06 arch systemd-coredump[1328]: Process 1181 (gvfs-udisks2-vo) of user 1000 terminated abnormally with signal 11/SEGV, processing...
Jun 28 16:05:06 arch systemd[1]: Created slice Slice /system/systemd-coredump.
░░ Subject: A start job for unit system-systemd\x2dcoredump.slice has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ A start job for unit system-systemd\x2dcoredump.slice has finished successfully.
░░
░░ The job identifier is 1297.
Jun 28 16:05:06 arch systemd[1]: Started Process Core Dump (PID 1328/UID 0).
░░ Subject: A start job for unit systemd-coredump@0-1328-0.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ A start job for unit systemd-coredump@0-1328-0.service has finished successfully.
░░
░░ The job identifier is 1292.
Jun 28 16:05:06 arch systemd[1]: lvm-activate-lvm.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ The unit lvm-activate-lvm.service has successfully entered the 'dead' state.
Jun 28 16:05:06 arch systemd-coredump[1329]: [?] Process 1181 (gvfs-udisks2-vo) of user 1000 dumped core.
                                             
                                             Stack trace of thread 1181:
                                             #0  0x000070e465e0907e g_task_get_task_data (libgio-2.0.so.0 + 0xa907e)
                                             #1  0x0000640d20ce9f39 n/a (gvfs-udisks2-volume-monitor + 0x13f39)
                                             #2  0x000070e46563e596 n/a (libffi.so.8 + 0x7596)
                                             #3  0x000070e46563b00e n/a (libffi.so.8 + 0x400e)
                                             #4  0x000070e46563dbd3 ffi_call (libffi.so.8 + 0x6bd3)
                                             #5  0x000070e465bcb2c4 g_cclosure_marshal_generic (libgobject-2.0.so.0 + 0x182c4)
                                             #6  0x000070e465bc464a g_closure_invoke (libgobject-2.0.so.0 + 0x1164a)
                                             #7  0x000070e465bf4ce5 n/a (libgobject-2.0.so.0 + 0x41ce5)
                                             #8  0x000070e465be55dc n/a (libgobject-2.0.so.0 + 0x325dc)
                                             #9  0x000070e465be5842 g_signal_emit_valist (libgobject-2.0.so.0 + 0x32842)
                                             #10 0x000070e465be5904 g_signal_emit (libgobject-2.0.so.0 + 0x32904)
                                             #11 0x000070e465b58b00 n/a (libudisks2.so.0 + 0x69b00)
                                             #12 0x000070e465c6feda n/a (libglib-2.0.so.0 + 0x5deda)
                                             #13 0x000070e465c6ea89 n/a (libglib-2.0.so.0 + 0x5ca89)
                                             #14 0x000070e465cd09b7 n/a (libglib-2.0.so.0 + 0xbe9b7)
                                             #15 0x000070e465c6f787 g_main_loop_run (libglib-2.0.so.0 + 0x5d787)
                                             #16 0x0000640d20cdf135 n/a (gvfs-udisks2-volume-monitor + 0x9135)
                                             #17 0x000070e465778c88 n/a (libc.so.6 + 0x25c88)
                                             #18 0x000070e465778d4c __libc_start_main (libc.so.6 + 0x25d4c)
                                             #19 0x0000640d20cdf195 n/a (gvfs-udisks2-volume-monitor + 0x9195)
                                             
                                             Stack trace of thread 1182:
                                             #0  0x000070e465866e9d syscall (libc.so.6 + 0x113e9d)
                                             #1  0x000070e465cc99e0 g_cond_wait (libglib-2.0.so.0 + 0xb79e0)
                                             #2  0x000070e465c378dc n/a (libglib-2.0.so.0 + 0x258dc)
                                             #3  0x000070e465ca3687 n/a (libglib-2.0.so.0 + 0x91687)
                                             #4  0x000070e465c9e236 n/a (libglib-2.0.so.0 + 0x8c236)
                                             #5  0x000070e4657e5ded n/a (libc.so.6 + 0x92ded)
                                             #6  0x000070e4658690dc n/a (libc.so.6 + 0x1160dc)
                                             
                                             Stack trace of thread 1239:
                                             #0  0x000070e46585b39d __poll (libc.so.6 + 0x10839d)
                                             #1  0x000070e465cd08fd n/a (libglib-2.0.so.0 + 0xbe8fd)
                                             #2  0x000070e465c6df95 g_main_context_iteration (libglib-2.0.so.0 + 0x5bf95)
                                             #3  0x000070e4645dffde n/a (libdconfsettings.so + 0x5fde)
                                             #4  0x000070e465c9e236 n/a (libglib-2.0.so.0 + 0x8c236)
                                             #5  0x000070e4657e5ded n/a (libc.so.6 + 0x92ded)
                                             #6  0x000070e4658690dc n/a (libc.so.6 + 0x1160dc)
                                             
                                             Stack trace of thread 1184:
                                             #0  0x000070e46585b39d __poll (libc.so.6 + 0x10839d)
                                             #1  0x000070e465cd08fd n/a (libglib-2.0.so.0 + 0xbe8fd)
                                             #2  0x000070e465c6f787 g_main_loop_run (libglib-2.0.so.0 + 0x5d787)
                                             #3  0x000070e465e72494 n/a (libgio-2.0.so.0 + 0x112494)
                                             #4  0x000070e465c9e236 n/a (libglib-2.0.so.0 + 0x8c236)
                                             #5  0x000070e4657e5ded n/a (libc.so.6 + 0x92ded)
                                             #6  0x000070e4658690dc n/a (libc.so.6 + 0x1160dc)
                                             
                                             Stack trace of thread 1183:
                                             #0  0x000070e46585b39d __poll (libc.so.6 + 0x10839d)
                                             #1  0x000070e465cd08fd n/a (libglib-2.0.so.0 + 0xbe8fd)
                                             #2  0x000070e465c6df95 g_main_context_iteration (libglib-2.0.so.0 + 0x5bf95)
                                             #3  0x000070e465c6dfea n/a (libglib-2.0.so.0 + 0x5bfea)
                                             #4  0x000070e465c9e236 n/a (libglib-2.0.so.0 + 0x8c236)
                                             #5  0x000070e4657e5ded n/a (libc.so.6 + 0x92ded)
                                             #6  0x000070e4658690dc n/a (libc.so.6 + 0x1160dc)
                                             ELF object binary architecture: AMD x86-64
░░ Subject: Process 1181 (gvfs-udisks2-vo) dumped core
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░ Documentation: man:core(5)
░░
░░ Process 1181 (gvfs-udisks2-vo) crashed and dumped core.
░░
░░ This usually indicates a programming error in the crashing program and
░░ should be reported to its vendor as a bug.
Jun 28 16:05:06 arch systemd[1]: systemd-coredump@0-1328-0.service: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ The unit systemd-coredump@0-1328-0.service has successfully entered the 'dead' state.
Jun 28 16:05:06 arch systemd[1027]: gvfs-udisks2-volume-monitor.service: Main process exited, code=dumped, status=11/SEGV
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ An ExecStart= process belonging to unit UNIT has exited.
░░
░░ The process' exit code is 'dumped' and its exit status is 11.
Jun 28 16:05:06 arch systemd[1027]: gvfs-udisks2-volume-monitor.service: Failed with result 'core-dump'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/l … temd-devel
░░
░░ The unit UNIT has entered the 'failed' state with result 'core-dump'.
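
For anyone wanting to dig further, the full backtrace should be retrievable with coredumpctl (assuming the matching debug symbols are installed):

coredumpctl list        # locate the gvfs-udisks2-volume-monitor entry
coredumpctl info 1181   # PID taken from the journal above
coredumpctl gdb 1181    # open the core in gdb for a full backtrace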

Offline

#3 2024-06-29 15:39:45

0x1
Member
Registered: 2022-11-27
Posts: 3

Re: trouble mounting a second internal NVMe (encrypted) from the XFCE desktop

I've solved my own problem. As I started to work through the issue, the solution became semi-familiar.

Since the storage drive is encrypted, I could use /etc/crypttab:

storage UUID=xxxxx-xxxx-xxxx-xxxx-xxxxxxxx none noauto,luks

Then, from there, I'd use /etc/fstab:

/dev/mapper/storage /run/media/user/storage       ext4         noauto,rw,relatime,user 0 0

Something along these lines got me back to a point where I can double-click an unmounted LUKS drive and start the process of unlocking and mounting.
The wiki page for /etc/crypttab was the spark I needed.
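
For completeness, the same entries can be exercised from a console (mapping name and mount point as in my crypttab/fstab above; the mount point has to exist, udisks would normally create it on the fly):

mkdir -p /run/media/user/storage
cryptsetup open /dev/disk/by-uuid/xxxxx-xxxx-xxxx-xxxx-xxxxxxxx storage   # same mapping name as the crypttab entry
mount /run/media/user/storage                                             # resolved through the fstab entry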

Last edited by 0x1 (2024-06-29 15:40:12)

Offline
