ASAv on VMware Guidelines and Limitations

You can create and deploy multiple instances of the ASAv on an ESXi server. The specific hardware used for ASAv deployments can vary, depending on the number of instances deployed and usage requirements. Each virtual appliance you create requires a minimum resource allocation (memory, number of CPUs, and disk space) on the host machine.

Review the following guidelines and limitations before you deploy the ASAv. Make sure to conform to the specifications below to ensure optimal performance.

You can deploy the ASAv on any server class x86 CPU device that is capable of running VMware ESXi. The host CPU must be a server class x86-based Intel or AMD CPU with virtualization extension. For example, ASAv performance test labs use at minimum the following: a Cisco Unified Computing System™ (Cisco UCS®) C series M4 server with Intel® Xeon® CPU E5-2690v4 processors running at 2.6GHz.

ASAv supports ESXi versions 6.0, 6.5, and 6.7.

The minimum memory requirement for the ASAv is 2GB. If your current ASAv runs with less than 2GB of memory, you cannot upgrade to 9.13(1)+ from an earlier version without increasing the memory of your ASAv VM. You can also redeploy a new ASAv VM with version 9.13(1) or later.

The following vNICs are recommended in order of optimum performance:

- i40e in PCI passthrough: Dedicates the server's physical NIC to the VM and transfers packet data between the NIC and the VM via DMA. No CPU cycles are required for moving packets.
- i40evf/ixgbe-vf: Effectively the same as above (DMAs packets between the NIC and the VM) but allows the NIC to be shared across multiple VMs.

Increasing Performance on ESXi Configurations:

- Multiple RX Queues for Receive Side Scaling (RSS)
- Enable SR-IOV on the Host Physical Adapter
- Upgrade the Compatibility Level for Virtual Machines
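The host CPU requirement above hinges on the virtualization extension (Intel VT-x or AMD-V). As a rough sketch of how you might verify this from a generic Linux shell before committing hardware to ESXi (the sample flags string below is illustrative, not from this document; on a real host the flags come from `/proc/cpuinfo`):

```shell
# Detect the CPU virtualization extension from a flags line:
# "vmx" = Intel VT-x, "svm" = AMD-V. Sample flags are hard-coded
# here for illustration; see the comment below for a live check.
flags="fpu vme de pse tsc msr pae vmx ept"
case " $flags " in
  *" vmx "*) echo "Intel VT-x present" ;;
  *" svm "*) echo "AMD-V present" ;;
  *)         echo "no virtualization extension" ;;
esac
# On a live Linux host, read the flags instead of hard-coding them:
#   grep -E -o -m1 'vmx|svm' /proc/cpuinfo
```

Note that the extension may exist in silicon but be disabled in BIOS/UEFI firmware, so a negative result is worth double-checking in the firmware setup screens.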
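The 2GB minimum means a pre-upgrade audit of VM memory can save a failed upgrade to 9.13(1)+. A minimal sketch, assuming a VMware `.vmx` configuration whose `memsize` key holds the VM memory in MB (the file contents, helper name `vmx_memsize_mb`, and VM name are illustrative, not part of the original document):

```python
# Sketch: check an ASAv VM's configured memory against the 2GB
# (2048 MB) minimum required before upgrading to 9.13(1)+.
MIN_MB = 2048

def vmx_memsize_mb(vmx_text: str) -> int:
    """Extract the memsize value (MB) from .vmx file contents."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "memsize":
            return int(value.strip().strip('"'))
    raise ValueError("memsize not found in .vmx")

# Illustrative .vmx fragment for a VM that needs more memory.
sample = 'displayName = "asav-1"\nmemsize = "1024"\n'
mb = vmx_memsize_mb(sample)
verdict = "OK to upgrade" if mb >= MIN_MB else "increase memory first"
print(f"{mb} MB: {verdict}")  # prints "1024 MB: increase memory first"
```

The same check could be pointed at each `.vmx` file on the datastore to flag every ASAv VM that needs its memory raised (or redeployment) before the upgrade.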