Regarding overhead: NIOC is good, but the general rule is to keep the ESXi kernel as free as possible. The more features we enable in software, the more overhead we eventually add, and if the physical hardware can do the same thing, that is better. That is why VMware now has VAAI, SR-IOV, and CPU/memory offloading. If someone has no blades but does have 10GbE, then NIOC is really the only option for carving up that bandwidth.
Management and vMotion are on the same VLAN (but different IP addresses) - from a security perspective it is recommended to keep them on separate VLANs, and in production that is how we do it. They can share the same NICs with no issues, but the traffic should be tagged. 2 pNICs with 2GbE each is good - I recently did a benchmark of vMotion speed on blades; you can see more info in my blog: http://pibytes.wordpress.com/2013/01/12/multi-nic-vmotion-speed-and-performance-in-vsphere-5-x/
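If you want to verify that multi-NIC vMotion is actually in play (i.e. more than one vmknic selected for vMotion on each host), here is a rough pyVmomi sketch that reports it per host. It is only an illustration - the vCenter name and credentials are placeholders for your own environment, and it assumes pyVmomi is installed:

```python
# Sketch only: list which VMkernel adapters are enabled for vMotion on each host.
# vCenter hostname and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; use proper certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        netcfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
        selected = set(netcfg.selectedVnic or [])
        for vnic in netcfg.candidateVnic or []:
            flag = "vMotion" if vnic.key in selected else "-"
            print(f"{host.name}  {vnic.device}  {vnic.spec.ip.ipAddress}  {vnic.portgroup}  {flag}")
    view.Destroy()
finally:
    Disconnect(si)
```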
A few questions then:
- I understand you recommend keeping management / vMotion in a separate pair of FlexNICs so as not to add extra management overhead within VMware.
It is not mandatory to use separate NICs - it all depends on the client environment. But they should be in separate VLANs. Of course VST tagging consumes some CPU cycles, but that is small enough to ignore; across the globe this is ignored. In your case you can use 2 pNICs with 2GbE each, combining mgmt/vMotion.
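To confirm the tagging quickly, something like this pyVmomi sketch (again, hostname and credentials are placeholders) lists every standard-vSwitch port group with its VLAN ID, so you can check that mgmt and vMotion land in different VLANs while sharing the same pNICs:

```python
# Sketch only: print the VLAN ID of every standard-vSwitch port group, to confirm
# mgmt and vMotion stay in separate (VST-tagged) VLANs even on shared uplinks.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pg in host.config.network.portgroup:
            print(f"{host.name}  vSwitch={pg.spec.vswitchName}  "
                  f"portgroup={pg.spec.name}  vlan={pg.spec.vlanId}")
    view.Destroy()
finally:
    Disconnect(si)
```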
- Is there a benefit to splitting management and vMotion into different VLANs but keeping them on the same uplink (i.e. tagging the VLANs)?
Security-wise, and as a best practice in production, we do use separate VLANs, and we can share the same NICs. There are some use cases, though: assume you have 100 hosts and many vMotions happening simultaneously - in that case we use a dedicated pSwitch so that the vMotion traffic won't flood the core switch, and we use dedicated pNICs as well. In a small, well-balanced environment with no over-subscription of cluster CPU/RAM there will be very little vMotion happening, so we can share the same pNICs and the same pSwitch.
- Would we be better off using two different pairs of FlexNICs, one for management and one for vMotion, thereby using 3 pairs of FlexNICs in total (one for all data VLANs, one non-tagged for management and a third non-tagged for vMotion)? If so, I guess you'd give them different bandwidths within VC to keep vMotion nice and fast?
As per the benchmarking I have done, 2GbE for vMotion is already very fast. And again, as I mentioned above, we can combine them - or just give 2 pNICs with 500Mb of bandwidth for mgmt, 2Gb for vMotion and the rest for VM traffic. This would be a good design: we have isolated physically, and VLAN-wise we also need to isolate.
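Just to make the arithmetic explicit, here is a back-of-the-envelope sketch of that split, assuming 10GbE FlexNICs (the numbers are the ones suggested above, per uplink, not fixed requirements):

```python
# Rough sketch of the suggested carve-up on an assumed 10GbE FlexNIC/uplink.
UPLINK_GBPS = 10.0                      # assumed FlexNIC speed
allocations = {"management": 0.5,       # 500 Mb as suggested above
               "vmotion":    2.0}       # 2 Gb as suggested above

vm_traffic = UPLINK_GBPS - sum(allocations.values())
for name, gbps in allocations.items():
    print(f"{name:<12} {gbps:>4.1f} Gb/s")
print(f"{'vm traffic':<12} {vm_traffic:>4.1f} Gb/s   # whatever is left over")
```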
For the poor response of the console, check the DNS resolution - it has nothing to do with the bandwidth. Also check the vCenter CPU/RAM and the vCenter database CPU/RAM.
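A quick way to rule DNS in or out is a simple forward/reverse lookup check like the sketch below - the hostnames are placeholders for your own vCenter and ESXi hosts:

```python
# Sketch only: forward/reverse DNS sanity check for vCenter and the ESXi hosts,
# since slow console response is usually name resolution rather than bandwidth.
import socket

names = ["vcenter.example.local", "esx01.example.local", "esx02.example.local"]

for name in names:
    try:
        ip = socket.gethostbyname(name)        # forward lookup
        rev = socket.gethostbyaddr(ip)[0]      # reverse lookup
        print(f"{name} -> {ip} -> {rev}")
    except socket.error as err:
        print(f"{name}: DNS problem: {err}")
```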