Hi,
I'm running Arch Linux and have KeePassXC installed, but I can't open a database stored on a network or cloud location (such as OneDrive, Google Drive or Proton Drive), whereas I can on Windows and Android. Is this a known issue, and is there a workaround that actually uses the network location rather than a local copy? The reason is so I can have Windows, Android and Linux all using the one KeePassXC database.
Furthermore, is KeePassXC better than KWallet on Arch Linux? Which do you prefer, and should I switch from KWallet to KeePassXC on Linux?
Also, I can think of a number of improvements for KeePassXC: being able to store credit/debit card information and GPG keys, for example, would be good features that it doesn't have at the moment. Are there any plans to implement these, and how do I go about suggesting them for a future release of KeePassXC? I think there's a way to store SSH keys in it, although I'm not familiar with how to do that, nor do I have a need for it at present.
Offline
You could make a script that uses wget or something like that to save the file to /tmp/keepassxc/, and then open the database in KeePassXC from there. As for the other stuff, that's already possible. You might be using the entries in an unconventional way, but there's nothing stopping you from using them to store credit/debit card details or PGP/SSH keys.
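A minimal sketch of that fetch-then-open approach, assuming your provider exposes the file over plain HTTP/WebDAV (the URL here is only a placeholder):

```shell
#!/bin/bash
# Sketch: fetch a copy of the database into /tmp and open it from there.
# DB_URL is a placeholder -- point it at your provider's real HTTP/WebDAV link.
DB_URL="${DB_URL:-https://example.com/dav/Passwords.kdbx}"
CACHE_DIR=/tmp/keepassxc

mkdir -p "$CACHE_DIR"
if wget --timeout=10 -q -O "$CACHE_DIR/Passwords.kdbx" "$DB_URL"; then
    # /tmp is usually tmpfs, so the local copy vanishes on reboot
    keepassxc "$CACHE_DIR/Passwords.kdbx" &
else
    echo "Could not download database from $DB_URL" >&2
fi
```

Keep in mind that edits then only touch the local copy; you'd have to upload the file back yourself afterwards (e.g. `curl -T` against a WebDAV URL), which is why syncing or mounting the remote is usually nicer.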
Offline
I'm syncing my db with Syncthing, but I guess the Nextcloud client would also work.
I'm also storing CC info in it: the card number as the username, and all other info as custom attributes.
The code is hosted on GitHub; you can open an issue there.
Offline
You can store your database on a cloud service and mount it with rclone. These cloud services are supported:
$ rclone help backends
All rclone backends:
alias Alias for an existing remote
acd Amazon Drive
azureblob Microsoft Azure Blob Storage
b2 Backblaze B2
box Box
crypt Encrypt/Decrypt a remote
cache Cache a remote
chunker Transparently chunk/split large files
combine Combine several remotes into one
compress Compress a remote
drive Google Drive
dropbox Dropbox
fichier 1Fichier
filefabric Enterprise File Fabric
ftp FTP
gcs Google Cloud Storage (this is not Google Drive)
gphotos Google Photos
hasher Better checksums for other remotes
hdfs Hadoop distributed file system
hidrive HiDrive
http HTTP
internetarchive Internet Archive
jottacloud Jottacloud
koofr Koofr, Digi Storage and other Koofr-compatible storage providers
local Local Disk
mailru Mail.ru Cloud
mega Mega
memory In memory object storage system.
netstorage Akamai NetStorage
onedrive Microsoft OneDrive
opendrive OpenDrive
oos Oracle Cloud Infrastructure Object Storage
pcloud Pcloud
premiumizeme premiumize.me
putio Put.io
qingstor QingCloud Object Storage
s3 Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
seafile seafile
sftp SSH/SFTP
sharefile Citrix Sharefile
sia Sia Decentralized Cloud
smb SMB / CIFS
storj Storj Decentralized Cloud Storage
tardigrade Storj Decentralized Cloud Storage
sugarsync Sugarsync
swift OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
union Union merges the contents of several upstream fs
uptobox Uptobox
webdav WebDAV
yandex Yandex Disk
zoho Zoho
To see more info about a particular backend use:
rclone help backend <name>
$
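Getting a remote set up in the first place looks roughly like this (the remote name gdrive and backend type drive are just examples; running rclone config with no arguments walks you through it interactively instead):

```
$ rclone config create gdrive drive
$ mkdir -p ~/Cloud/GDrive
$ rclone --vfs-cache-mode writes mount gdrive: ~/Cloud/GDrive &
```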
I use a simple script that I autostart with my DE to mount:
#!/bin/bash
while true; do
# Check to see if there is an internet connection by pinging primary DNS
if ping -q -c 1 -W 1 1.1.1.1 &> /dev/null; then
# Connected, mount the drive and break the loop
rclone --vfs-cache-mode writes mount "OneDrive Private": ~/Cloud/OneDrive-Private &
break
else
# Not connected, wait for one second and check again
sleep 1
fi
done
This could probably be done more elegantly as a systemd user unit or something, but it does the job for me.
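As a sketch of the systemd route (the unit name is made up; the remote and mount point match the script above, so adjust them to your setup), saved as ~/.config/systemd/user/rclone-onedrive.service:

```
[Unit]
Description=rclone mount of OneDrive Private

[Service]
Type=notify
ExecStart=/usr/bin/rclone --vfs-cache-mode writes mount "OneDrive Private:" %h/Cloud/OneDrive-Private
ExecStop=/usr/bin/fusermount -u %h/Cloud/OneDrive-Private
# Retry until the network is up, like the ping loop above
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now rclone-onedrive.service`.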
KeePassXC also works like a charm as an SSH agent: while the database is unlocked I can log straight in, and when it's locked my SSH access is closed along with it.
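With the SSH Agent feature enabled in KeePassXC's settings and a key attached to an entry, you can watch this happen from a shell (the fingerprint line here is just an example):

```
$ ssh-add -l        # database unlocked: KeePassXC has added the key
256 SHA256:... user@host (ED25519)
$ ssh-add -l        # database locked: the key was removed again
The agent has no identities.
```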
Offline