Pages: 1
Topic closed
It takes about 11 seconds to mount a directory on another computer over my local network. Once that folder is mounted, NFS4 works fine and files can be transferred very rapidly. The computer(s) I am mounting are already listed in my "/etc/hosts" file, and the firewall configuration(s) looks fine.
This problem showed up about a month ago, so I think it might have something to do with one of the updates to my system. Mounting NFS shares over my local network is slow regardless of whether it's done by autofs or manually via the command line.
I tried the verbose switch when mounting manually, but it produced no error messages.
Last edited by marko2010 (2013-08-16 07:33:31)
Offline
If it's any comfort to you, I have observed this as well. It only seems to happen when mounting for the first time after a clean boot; a subsequent umount and mount is more or less instantaneous.
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
Online
At last, someone with the same problem! It started just about a month ago for me too. Mounting manually (with the mount command) rather than from fstab goes quicker, so I suspect systemd.
Offline
I don't see a difference actually.
Online
I have the same issue. Both machines are running Arch and are up to date. I hadn't bothered to look into it before, but now I've checked dmesg* after the first mount of a share:
[ 34.145396] NFS: Registering the id_resolver key type
[ 34.145411] Key type id_resolver registered
[ 34.145412] Key type id_legacy registered
[ 49.156761] RPC: AUTH_GSS upcall timed out.
Please check user daemon is running.
So, I rebooted and started the rpc-gssd service. Now the share mounts right away.
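A sketch of that fix on a systemd-based client (service name as shipped by nfs-utils; enable makes it persistent across reboots):

```shell
# Start the user-space GSS daemon the kernel's AUTH_GSS upcall was waiting on
systemctl start rpc-gssd.service

# Optionally make it start on every boot
systemctl enable rpc-gssd.service
```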
Granted, I have no idea what the 'RPC GSS-API' is, and the server component - rpc-svcgssd - is not running on my nfs server. But the client is just a testbed, so... I don't really care.
* Actually, first I checked the journal. But in place of the last two lines I quoted above (ie. the useful bit), it contained
Aug 15 16:20:29 catawompus kernel: [68B blob data]
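As an aside on the [68B blob data] line: journalctl abbreviates multi-line or unprintable entries that way by default; the -a/--all switch prints them in full (see journalctl(1)):

```shell
# -k limits output to kernel messages; -a expands entries that would
# otherwise be summarized as "[...B blob data]"
journalctl -k -a
```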
Last edited by alphaniner (2013-08-15 20:26:53)
But whether the Constitution really be one thing, or another, this much is certain - that it has either authorized such a government as we have had, or has been powerless to prevent it. In either case, it is unfit to exist.
-Lysander Spooner
Offline
@alphaniner - You should add this to the nfs wiki.
Online
So, I rebooted and started the rpc-gssd service. Now the share mounts right away.
Enabling rpc-gssd in systemctl, then.
Last edited by nomorewindows (2013-08-15 20:28:35)
I may have to CONSOLE you about your usage of ridiculously easy graphical interfaces...
Look ma, no mouse.
Offline
@graysky
I don't think that'd be appropriate considering my aforementioned ignorance.
Last edited by alphaniner (2013-08-15 20:29:52)
Offline
Enabling rpc-gssd.service cut boot from 28 s to 11 s again! Thx!
Offline
@alphaniner
checking dmesg after mounting was a good idea. When I tried it I saw the exact same error ("RPC: AUTH_GSS upcall timed out.")
So then I ran on the nfs client computer:
sudo systemctl start rpc-gssd.service
And the problem went away. Now NFS4 shares are basically mounting instantaneously -- even with autofs.
I'm going to do a little more testing on this for the rest of the day, and if there are no hiccups I plan to mark this thread "SOLVED".
Last edited by marko2010 (2013-08-15 21:37:56)
Offline
So then I ran on the nfs client computer
Ah! Thanks for this. I tried to do it on my NFS server, and thought, "Hey! This doesn't work for me!" Then I got down to your post here and thought to myself, "I'm an idiot!"
(I'm actually posting here mostly so that the thread will show up when I use the 'posted' topics, so sorry for the useless noise)
Offline
Guess the next question is: does this belong on the wiki or on Flyspray? In other words, is this behavior expected?
Online
::Public Service Announcement::
I think a sanity check is in order. rpc-gssd is described as a client service and has a corresponding server component, rpc-svcgssd. As I said, I have no idea what these are or what they do, but generally I avoid running a client service unless it can access its counterpart.
To me, this reeks of workaround. I expect there's a mount option, export option, or nfs client/server option that will prevent the "AUTH_GSS upcall" from being issued in the first place.
For all I know, running rpc-gssd without proper configuration and/or without a server for it to contact could pose a security risk.
That is all.
::End Public Service Announcement::
Offline
According to this thread:
http://forums.fedoraforum.org/showthread.php?t=249619
rpc-svcgssd on the server side is for NFS Kerberos authentication. I do not use it, and never will. So it should be optional on the client side, one would think. Well, it is, but it works really badly without it. I still suspect systemd ...
Offline
Open a flyspray to get it in the hands of people with more knowledge.
Online
The sec= option (see nfs(5) and exports(5)) seemed promising, but specifying sec=sys didn't change anything for me. I tried it in the mount alone and in both the mount and the export. Granted, sec=sys is the default, but I figured it couldn't hurt.
Maybe it's worth someone else looking into though.
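For reference, the combination described above might look like this, with a hypothetical export path, subnet, and server name:

```shell
# Server side, /etc/exports (hypothetical path and subnet):
#   /srv/data  192.168.0.0/24(rw,sec=sys)
# Apply the export change without restarting the NFS server:
exportfs -ra

# Client side, request AUTH_SYS explicitly at mount time:
mount -t nfs4 -o sec=sys myserver:/srv/data /mnt/data
```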
Edit: Well, here's a source which recommends starting rpc-gssd as a solution. I still don't like it, though. It also seems to suggest that the issue shouldn't occur when sec=sys is used.
Last edited by alphaniner (2013-08-16 18:43:29)
Offline
The unfathomable systemd is involved too. She moves in mysterious ways ...
Offline
I still don't see any evidence of that.
Offline
The unfathomable systemd is involved too. She moves in mysterious ways ...
I have a Debian sid system that I use as an NFS server. It uses sysvinit, not systemd. I also have another machine that runs a similar sid system, again with sysvinit, that can connect to the server. I had the connection delay problem until I ran rpc.gssd on the client. Neither machine uses systemd.
Last edited by 2ManyDogs (2013-08-16 19:51:28)
Offline
Sorry for the necro-bump, but I think it's silly to start a different thread for the exact same issue, given the useful info in this thread and what seems like a workaround as the solution. I'd rather not enable another systemd service if it's avoidable.
I too have been observing this issue as stated in the OP, since the timeframe of the thread discussion. As I only connect to my NFS share infrequently, I didn't pursue investigation.
It looks like systemd is absolved based on the last post. I could not find a related Flyspray task, and I'm not sure whether this is an upstream issue or specific to Arch.
I guess I'll have to test with a different client, but wondering if anyone pursued this issue further. Is there an existing bug filed either upstream or flyspray?
Last edited by tekstryder (2013-12-11 19:51:00)
Offline
Some folks (including myself) seem to have problems with running rpc-gssd as a workaround. I've managed to avoid both the mount delay and running an unneeded service (rpc.gssd) by blacklisting rpcsec_gss_krb5 (see https://bugzilla.redhat.com/show_bug.cgi?id=1001934, the third suggested workaround). It seems to work fine on i686.
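For anyone wanting to try it, the blacklist boils down to one modprobe.d line (the file name here is arbitrary; the assumption is that without the module loaded, no AUTH_GSS upcall is attempted at mount time):

```shell
# Stop the kernel from auto-loading the Kerberos flavour of RPCSEC_GSS
echo 'blacklist rpcsec_gss_krb5' > /etc/modprobe.d/nfs-no-gss.conf
# Takes effect on next boot (or after unloading the module)
```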
Offline
I've managed to avoid both the mount delay and running an unneeded service (rpc.gssd) by blacklisting rpcsec_gss_krb5 (see https://bugzilla.redhat.com/show_bug.cgi?id=1001934, the third suggested workaround).
This seems like a much more sane solution than running a client w/o a matching service. I can confirm that initial tests seem to indicate that this is working fine on my x86_64 machines as well.
Offline
I had the same issue on Debian testing, without systemd, and disabling the rpcsec_gss_krb5 module worked for me:
echo blacklist rpcsec_gss_krb5 > /etc/modprobe.d/dist-blacklist.conf
reboot
thanks to Painless (in this thread).
Offline
Please don't necrobump old (and solved) threads: https://wiki.archlinux.org/index.php/Fo … Bumping.27
Closing
Offline