Hello,
so I've been using QEMU with KVM on my system and nearly everything works fine. But there is one anomaly that drives me crazy: if I copy a file from within the guest OS to a publicly writable Samba share on the host, the transfer speed is only about 3 MiB/s!
- Using iperf I measured the network speed from the guest OS to the host at roughly 4 Gbit/s.
- If I copy a file from the public Samba share on the host to the guest OS (from within the guest OS), the transfer speed is about 80 MiB/s.
- If I copy a file via Samba from the host OS to a publicly writable Samba share on the same host, the transfer speed is about 140 MiB/s (!).
So where is the bottleneck? Why is the transfer speed via samba from the guest OS to the host just 3 MiB/s?
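For reference, the iperf test was roughly the following (the host IP is just a placeholder for whatever address the host has from the guest's point of view):
on the host:  iperf -s
in the guest: iperf -c 192.168.122.1 -t 10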
Thanks in advance
https://wiki.archlinux.org/index.php/QEMU#Networking
What type of networking is QEMU using for your VM?
https://wiki.archlinux.org/index.php/QEMU#Networking
What type of networking is QEMU using for your VM?
Hi, what do you mean by that? I tested the NIC in the VM with iperf (TCP) and got nearly 4 Gbit/s. Shouldn't Samba (which also uses TCP) therefore be quite fast?
Keep in mind that the VM usually has a virtual network card, not a physical one.
Sending data to a virtual network card is akin to copying data from one location in memory to another location in that same memory.
When copying stuff through SMB, other devices besides memory (like HDD/SSD, USB drives etc.) also influence the speed.
Apart from that, QEMU provides several networking methods.
The default one is called user-mode networking.
It's simple but slow.
TAP networking is more complicated but a lot faster.
Then the type of OS running in the VM and the driver it uses also influence the speed.
Are you using user-mode networking, TAP networking or some other method?
If you're not certain, post the command line you use to start the VM.
(If you're using virt-manager, check its wiki page for how to get that info.)
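A minimal sketch of how the two look on the QEMU command line (the id, tap interface name and disk image are placeholders, and a virtio NIC is assumed):
user-mode (the default, slow):
qemu-system-x86_64 -enable-kvm -m 2G disk.img \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0
TAP (requires a tap device, e.g. tap0, already set up and usually bridged on the host):
qemu-system-x86_64 -enable-kvm -m 2G disk.img \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0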
Keep in mind that the VM usually has a virtual network card, not a physical one.
Sending data to a virtual network card is akin to copying data from one location in memory to another location in that same memory.
When copying stuff through SMB, other devices besides memory (like HDD/SSD, USB drives etc.) also influence the speed.
Hi,
thanks for trying to help me, but I don't think that argument holds in this case:
I started the iperf server on my host and ran 5 tests from inside the VM (Arch Linux) against it. The iperf server prints the same results (~4 Gbit/s) on stdout as the client inside the VM does.
So, to rule out the possibility that the iperf server 'just copies data from one location in memory to another', I also copied data using SSH.
I created 1 GiB of random data inside the VM (dd if=/dev/urandom of=testdata bs=1M count=1024) and copied that file with scp to the host at ~150 MiB/s (which is nearly 90% of the maximum bandwidth of my disk).
I tried optimizing the Samba server as suggested in the wiki, but to no avail: still only ~3 MiB/s from guest OS to host ... (same disk I used in my scp test).
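For completeness, the scp part of that test was essentially just (user name and host address are placeholders):
scp testdata user@<host-ip>:/tmp/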
Greetz
For clarity:
In my opinion the number iperf gave is not a good reflection of the transfer speed.
You do appear to have a problem with SMB transfers though.
The scp copy test does give an idea of what speed the network transfer can reach.
Are you using user-mode networking, TAP networking or some other method?
You haven't answered that question.
Also:
What are your host and guest OS, and which filesystems are used?
The culprit is the lightweight file manager PCManFM (and Nautilus as well), which uses gvfs-smb to mount CIFS/Samba shares; it seems that gvfs-smb has some hardcoded, tiny buffers which cause the bad transfer speeds.
So when I mounted the Samba/CIFS share from the console instead,
mount -t cifs -o guest //10.0.2.2/test /mnt/test
I could copy files from within the guest OS (Arch Linux) to the host (Arch Linux) at nearly 160 MiB/s (which is the limit of my HDD).
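If you want the share mounted permanently, something along these lines in /etc/fstab should work (the uid and charset options are just examples, adjust as needed):
//10.0.2.2/test  /mnt/test  cifs  guest,uid=1000,iocharset=utf8  0  0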
Glad you solved it.
gvfs is used by many applications; have you tried reporting the issue upstream?