Question about DPDK performance on bare-metal VS virtual
Martinx - ジェームズ
thiagocmartinsc at gmail.com
Mon Apr 11 06:56:09 UTC 2016
On 11 April 2016 at 03:36, Martinx - ジェームズ <thiagocmartinsc at gmail.com> wrote:
> Let's take a server with 2 x 1G NICs embedded, and 2 x 10G ixgbe NICs
> (compatible with DPDK + PMD).
> I have a Proprietary L2 Bridge DPDK Application (unfortunately, it is
> still CentOS-based) that, on bare metal or with PCI passthrough under
> KVM, can top out the hardware speed: it bridges at 19.XG (full
> speed, with IXIA traffic generators attached), with virtually no packet
> loss.
> Now, I'm planning to move this "Proprietary L2 Bridge DPDK Application"
> to a purely virtual environment using only VirtIO devices (i.e., without
> any kind of PCI Passthrough).
> However, on the same server host (2x1G + 2x10G NICs), I'll be using
> Ubuntu Xenial with Open vSwitch + DPDK (already running, BTW).
> So, the idea will be something like this:
> * Setup 1 - PCI Pass to the KVM guest
> 10G NIC 1 <-> L2 DPDK App <-> 10G NIC 2 = 19.XG full-duplex
> * Setup 2 - OVS + DPDK to the KVM guest
> 10G NIC 1 <-> OVS+DPDK <-> VirtIO <-> L2 DPDK App <-> VirtIO <-> 10G NIC
> 2 = XXG?
> My question is:
> With "Setup 2" perfectly tuned, do you guys think that I'll be able to
> hit about 15G? Maybe even closer to the "bare-metal" 19G?
> I am trying many things here, but I am unable to see it pass 2.8G, and
> that is in one direction only; if I start traffic in both directions, it
> drops to an even lower rate.
> This looks like a very complex setup and I'm still learning about how to
> put all the moving parts together... I really appreciate any tip!
> * Useful links:
So far, my OVS+DPDK and subsequent KVM setup looks like this:
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev
ovs-vsctl add-port br1 dpdk1 -- set Interface dpdk1 type=dpdk
ovs-vsctl add-port br1 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
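One thing worth checking on the host side (a sketch, assuming an OVS build that reads these options from the database; the core mask below is only an example and must match your own CPU topology):

```shell
# Pin the OVS PMD (poll mode driver) threads to dedicated, isolated cores
# (0x6 = cores 1 and 2 here -- purely an example mask):
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
# Then verify that the PMD threads are actually polling the dpdk/vhost ports:
ovs-appctl dpif-netdev/pmd-stats-show
```

If the PMD threads end up sharing cores with the guest's vCPUs or with host housekeeping tasks, throughput collapses, which could explain a low ceiling like the 2.8G you are seeing.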
qemu-system-x86_64 -enable-kvm -m 6144 -smp 10 -vnc 0.0.0.0:0 \
-net user,hostfwd=tcp::10021-:22 -net nic \
-net user -net nic \
-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01 \
-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02 \
-object memory-backend-file,id=mem,size=6144M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc
Very likely I am missing many, many things in this setup... After
all, it is much slower than on bare metal...
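One possible culprit: `-numa node,memdev=mem` presupposes a `memory-backend-file` object on hugepage-backed, shared memory; without it, vhost-user cannot map the guest's RAM and falls back badly or fails. A minimal host-side sketch (the hugepage count is an example and must cover the guest's 6144M):

```shell
# Reserve 2MB hugepages on the host (3072 * 2MB = 6GB -- example sizing):
echo 3072 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# Mount hugetlbfs where the QEMU mem-path expects it:
mount -t hugetlbfs none /dev/hugepages
```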
BTW, is it possible to translate those QEMU options into Libvirt XML?
It is very annoying to deal with those commands...
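Not sure this counts as a full translation, but libvirt does support vhost-user NICs natively; a sketch of the relevant domain XML fragments (socket paths and MAC addresses taken from the QEMU command line above; the NUMA cell sizing is an assumption matching the 6144M guest):

```xml
<!-- hugepage-backed, shared guest memory (required for vhost-user) -->
<memoryBacking>
  <hugepages/>
</memoryBacking>
<cpu>
  <numa>
    <cell id='0' cpus='0-9' memory='6291456' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>
<devices>
  <!-- one vhost-user interface per OVS socket -->
  <interface type='vhostuser'>
    <mac address='52:54:00:02:d9:01'/>
    <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
    <model type='virtio'/>
  </interface>
  <interface type='vhostuser'>
    <mac address='52:54:00:02:d9:02'/>
    <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
    <model type='virtio'/>
  </interface>
</devices>
```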
More information about the ubuntu-server mailing list