If I use smbclient and do 'get somefile', I get speeds up to 100MB/s (Gigabit LAN is in use). If I read from the cifs mount point, I get only 10-12MB/s. I have tried both 'cp' and 'dd' (the latter with bs=8M and of=/dev/null), and both mounting directly and via autofs, with the same result. The autofs string looks like this:
media -fstype=cifs,iocharset=utf8,noperm,noatime,rw,user=uuuuuu,password=ppppppp ://nas/media
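For reference, roughly the commands I compare (the file name and mount point are just examples; adjust the path to your autofs base directory):
# read via smbclient (fast case)
smbclient //nas/media -U uuuuuu%ppppppp -c 'get somefile /dev/null'
# read via the cifs mount (slow case)
dd if=/path/to/autofs/media/somefile of=/dev/null bs=8M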
Where should I dig? What causes such a big difference between smbclient and cifs mounting? Are there other ways to mount a Samba share and get all the traffic the LAN is able to transmit?
"I exist" is the best myth I know..
Offline
Well, I must add that writing to the cifs mount point ('cp' or 'dd') is as fast as 'put' with smbclient (also up to 100MB/s). Only reading is affected.
"I exist" is the best myth I know..
Offline
OK, friends, while waiting for your answers I have found a way to raise the speeds that is not absolutely perfect, but very good. I have
- added 'directio' to the mount options (this is the main trick), and
- added a file to /etc/modprobe.d/ with the line 'options cifs CIFSMaxBufSize=130048'.
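Concretely, that looks like this (the modprobe.d file name is arbitrary, and a manual mount is shown instead of autofs just for illustration):
# /etc/modprobe.d/cifs.conf (any *.conf name works)
options cifs CIFSMaxBufSize=130048
# manual-mount equivalent of the autofs line above, plus directio
mount -t cifs //nas/media /mnt/media -o iocharset=utf8,noperm,noatime,rw,user=uuuuuu,password=ppppppp,directio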
Now (with dd) I get 98MB/s reading from Samba and 75MB/s writing to it (with real file operations on the Samba side and /dev/zero and /dev/null on the client side). It is slightly lower than with smbclient (110MB/s and 88MB/s) but, I hope, acceptable.
Nevertheless, I'll still be happy to hear about that last secret trick that resolves this!
"I exist" is the best myth I know..
Offline
Heh.. Nobody uses shares in the Arch world...
Now I'm choosing a suitable two-panel file manager to work with cifs. Along the way, rough throughputs in relative values are listed below (with the fastest, smbclient, equal to 1.0):
1.0 smbclient
0.8 dd
0.7 mucommander, worker
0.5 cp, qtFM
0.3 mc
0.1 KDE
To tell the truth, I'm completely frustrated.. I don't understand such a terrible difference.
How are we all still alive with such throughput??
Hello student975,
I guess maybe people aren't jumping in here because they are not really seeing the problem. You're getting 100MB/s? Wow! That's about the theoretical limit of a gigabit LAN, isn't it? I'm not sure how you are doing that; I have never gotten close to transferring across a LAN at near the limit. Hell, I don't get those speeds on my internal hard drive!
I'm not sure why smbclient is faster than mounting a cifs share. Are you connecting to ntfs shares with ntfs-3g?
I guess maybe people aren't jumping in here because they are not really seeing the problem. You're getting 100MB/s? Wow!
Yes, smbclient reads at 110MB/s, and dd 'reads' at 98MB/s; the latter via mount.cifs.
The main question is: why are the file managers so different? And, say, especially for you: why is muCommander (a Java application) ~1.5× faster than qtFM? Not to mention mc and, especially, KDE..
I'm not sure why smbclient is faster than mounting a cifs share.
Probably smbclient and mount.cifs use different APIs.
Are you connecting to ntfs shares with ntfs-3g?
I'm connecting from an Arch Linux workstation (i5 760, 4GB DDR3, ...) to a self-made NAS (also Arch Linux, Celeron E3200, 2GB DDR3, and so on) via a NetGear SG105E gigabit switch. Both computers' motherboards (both Gigabyte) have 1Gb LAN ports. iperf shows 950Mbit/s in one direction, and 740+740Mbit/s in duplex.
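(Measured with plain iperf, roughly like this; the hostname is an example:)
# on the NAS
iperf -s
# on the workstation: one direction, then both directions simultaneously
iperf -c nas
iperf -c nas -d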
"I exist" is the best myth I know..
Offline
I have the same problem. I think part of the problem is that a lot of people running Linux must have not-so-fast hardware.
My RAID-0 SSDs can read at ~400MB/s and my RAID-0 regular 500GB hard drives can read at ~170MB/s, so the bottleneck is my 1Gbit NIC when copying between two machines.
Anyway, it's unfortunate that this type of copying isn't more optimized in Linux. If I have some time, maybe I'll write a nice GUI that does fast copies. Otherwise, smbclient seems to do the job.
Anyway, it's unfortunate that this type of copying isn't more optimized in Linux. If I have some time, maybe I'll write a nice GUI that does fast copies. Otherwise, smbclient seems to do the job.
Do you mean experiments with different file managers, as I have described above? Do you have your own such measurements?
I haven't tried all the file managers, but I have used Nautilus, mounting with cifs, and smbclient, and only smbclient has a decent speed.
Nautilus maxes out at 60MB/s, and mounting cifs with the default options gives me less than 10MB/s. On the other hand, smbclient gives me around 110MB/s, which is near the max for a 1Gbit NIC.
Well, probably related; just vote for that bug, Dolphin is a pig for mounted shares:
https://bugs.kde.org/show_bug.cgi?id=255306
I'm having the exact same issue as described by student975. Unfortunately I don't have a solution to offer, but thought I should say you are not alone.
Your two suggestions (directio and CIFSMaxBufSize) were very helpful, improving my speed by about 3 or 4 times. Unfortunately, it's still half the speed of smbclient! I'm a Gentoo user on kernel 2.6.37. Approximate speeds (I'm on a 100Mb network through HomePlug adapters, hence the slower speeds):
* smbclient: 7.5 MB/s
* cifs mount cp (directio and CIFSMaxBufSize): 3.5 MB/s
* cifs mount cp (directio): 2.5 MB/s
* cifs mount cp (with default options): <1MB/s
If anyone has a solution to make cifs mounts copy as fast as smbclient please share!
Not using KDE, so not related to the above for me.
I tried directio in the past; while it makes a huge difference in speed, it has a drawback.
I have a cifs share from win1 mounted on lin1 with directio (mountpoint /mnt/cifs1).
That mounted cifs share is then re-exported from lin1 via sshfs to lin2 (mountpoint /mnt/cifs1_sshfs1).
When I access /mnt/cifs1_sshfs1 from lin2 and try to copy a large file, it is truncated without any error message.
Without directio, it works flawlessly.
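The layering looks like this (the share name and usernames are placeholders):
# on lin1: mount the Windows share with directio
mount -t cifs //win1/share /mnt/cifs1 -o user=someuser,directio
# on lin2: re-export lin1's mountpoint over sshfs
sshfs someuser@lin1:/mnt/cifs1 /mnt/cifs1_sshfs1
# copying a large file from /mnt/cifs1_sshfs1 then arrives truncated; without directio it is fine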
Hey, twl just posted this in another thread, might be related to this problem.
Hey, twl just posted this in another thread, might be related to this problem.
I get this maximum (110 MiB/s) without that trick, but I have measured it with the Linux smbclient only (I don't have a Windows installation anywhere around). In any case, thanks for the trick.
"I exist" is the best myth I know..
Offline
Hey, twl just posted this in another thread, might be related to this problem.
No luck; cifs totally ignores any directive in smb.conf.
I guess the cifs driver doesn't know about smb.conf at all; that trick is for the server side.
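If that trick is the usual socket-options tuning (just my guess, I haven't seen the other thread), it would belong in smb.conf on the NAS, something like:
[global]
    # server-side TCP buffer tuning; the values are only an example
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536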
"I exist" is the best myth I know..
Offline
It is an old thread, *I know*,
but if someone lands here with problems, they should know that on Linux >= 3.2, specifying an rsize at mount time now gives lower transfer speeds, and modprobing cifs with the CIFSMaxBufSize parameter isn't that relevant anymore.
http://permalink.gmane.org/gmane.linux.kernel.cifs/4221
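In practice (the share and paths are just examples) that means on a >= 3.2 kernel you simply mount without forcing rsize and check what the client negotiated:
# let the client negotiate rsize/wsize itself (no rsize= option)
mount -t cifs //nas/media /mnt/media -o user=uuuuuu
# see the negotiated values in the mount options
grep cifs /proc/mounts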
@kokoko3k
For such valuable information, the thread cannot be "old". Thanks!
"I exist" is the best myth I know..
Offline