#1 2016-02-08 20:20:02

brando56894
Member
From: NYC
Registered: 2008-08-03
Posts: 681

Better network performance between host and VM

I have a bunch of usenet downloading apps (nzbget, sickrage, couchpotato) set up in an Arch KVM guest. The image format is RAW, cache is set to none, IO mode is set to native, and it lives on a 128 GB Samsung Evo in my NAS (Intel 8-core Avoton @ 2.4 GHz, 32 GB ECC, 2x10Gbps ethernet, 11 TB mirrored ZFS pool [3 VDEVs] with a 120 GB SLOG SSD attached to an Intel M1015 HBA). Read and write performance is good within the NAS (~100 MB/sec), and network performance is good as well, pretty much maxing out a 1Gbps ethernet connection via NFS, but I want to see if I can squeeze more performance out of this baby by skipping the physical network, which is limited to 1Gbps. I'm using the virtio drivers for both the disk and the network.
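For reference, the disk settings described above would look roughly like this in the libvirt domain XML; the image path and target device name here are placeholders, not my actual values:

```xml
<!-- Sketch of a libvirt <disk> element with the settings mentioned above:
     raw image, cache='none', io='native', virtio bus.
     The source path and dev name are hypothetical. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/arch-usenet.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```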

Since the VM and the host both live on the same machine, would it be more efficient to use some form of internal networking between the VM and the NAS (NZBget writes to two different datasets shared out to the VM via NFS4), while still keeping my current bridge setup to allow network-wide access to the VMs? I know my pool can do about 300 MB/sec of random writes, and I've seen it spike up to 900 MB/sec for sequential writes. Eventually I'd like to upgrade my switch and my desktop to 10G, but that won't be for a while since those are pretty expensive.
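If it helps clarify what I mean by internal networking: I'm thinking of something like an isolated, host-only libvirt network alongside the existing bridge, with the NFS mounts pointed at the host's address on that network. Traffic between the host and the guest on such a network stays in software and never touches the physical NIC. A rough sketch (the network name, bridge name, and subnet are just examples):

```xml
<!-- Sketch of an isolated (host-only) libvirt network definition.
     Host-to-guest traffic on this network does not traverse the
     physical 1Gbps NIC. All names and addresses are hypothetical. -->
<network>
  <name>hostonly</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
```

This could be loaded with `virsh net-define hostonly.xml` followed by `virsh net-start hostonly`, then a second virtio NIC added to the VM on that network, keeping the existing bridge for LAN access.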
