Referring to your query:
xavierwalker wrote:
We have two shared uplink sets, one per VC module:
/servlet/JiveServlet/downloadImage/2-2177444-25730/sus1.png
Inside VC, we have created two virtual connect networks per VLAN and assigned each VCnet to one of the shared uplink sets. That way, we have the system running active-active on the uplinks.
This is correct from the VC side.
The servers are then configured to run active-passive, and we have configured alternating preferred network cards (vNICs) to roughly load balance between the two pairs of networks.
I really wonder why you chose Active/Passive mode on the ESX side. ESX will do the load balancing and failover; there is no need to do anything in the guest OS or by any other means.
As per your diagram you have 2 LOMs, so you have 8 FlexNICs in total.
- You have already created the required VLANs in VC and added them to the shared uplink sets.
- Now decide how many vSwitches you need and how many NICs go to each vSwitch.
- Just distribute the total FlexNICs as per your requirement and set the bandwidth in VC.
- Use NIC teaming in ESX and put the NICs in active/active. You already have VC in active/active, so you need to do the same on the ESX side (see the sketch after this list).
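To illustrate only the active/active teaming piece, here is a minimal pyVmomi sketch. It assumes an existing vSwitch named vSwitch1 and FlexNICs that ESX sees as vmnic0 and vmnic2; the host name and credentials are placeholders, not values from your setup:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Assumed host and credentials -- replace with your own.
si = SmartConnect(host="esxi01.example.com", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net_sys = host.configManager.networkSystem

# Uplink the vSwitch to one FlexNIC from each LOM (vmnic0 / vmnic2 are assumed names).
spec = vim.host.VirtualSwitch.Specification()
spec.numPorts = 128
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic2"])

# Teaming policy: both NICs active, load balanced by originating virtual port.
policy = vim.host.NetworkPolicy()
policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
policy.nicTeaming.policy = "loadbalance_srcid"
policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy()
policy.nicTeaming.nicOrder.activeNic = ["vmnic0", "vmnic2"]   # active/active
spec.policy = policy

net_sys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)
Disconnect(si)
```

The same thing can of course be done in the vSphere Client; the point is simply that both vmnics sit in the active list, matching the active/active uplink sets in VC.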
You can also see the recent post on how to use the NICs for vMotion, NFS, management, etc.
Regarding your other query:
"In order to simplify the above VCnet configuration, we were considering just using one pair of 10Gb-allocated FlexNICs and tagging all traffic in it. And that would include the management, vmotion, etc. Instead of cutting up the 10Gb LOM bandwidth at the VC level which isn't that flexible, we would then use the network management tools within the VMware vDS."
If you are using FlexNICs and Flex-10 networking, there is no need to use NIOC or other ways of limiting network bandwidth from the hypervisor layer; that would just be overhead.
That is the beauty of the HP blades and this technology: everything is done on the VC side.
NIOC is suitable for ordinary 10 Gb cards, so people can divide and set shares for the traffic and manage it from the ESX level. With HP VC there is no need.
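Just for contrast, this is roughly what the NIOC-shares approach looks like on a vDS with plain 10 Gb cards, i.e. the thing you can skip with VC FlexNICs. It is a sketch only, reusing the connection from the sketch above; the vDS name "dvSwitch0" and the "vmotion" pool key are assumptions, and the type names follow the older (v2) network resource-pool API:

```python
# Reuses si/content from the sketch above; "dvSwitch0" is an assumed vDS name.
dvs_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in dvs_view.view if d.name == "dvSwitch0")

# Turn on Network I/O Control on the vDS.
dvs.EnableNetworkResourceManagement(enable=True)

# Give the vMotion system pool custom shares.
pool = next(p for p in dvs.networkResourcePool if p.key == "vmotion")
alloc = vim.DVSNetworkResourcePoolAllocationInfo(
    shares=vim.SharesInfo(level="custom", shares=50))
pool_spec = vim.DVSNetworkResourcePoolConfigSpec(
    key=pool.key, configVersion=pool.configVersion, allocationInfo=alloc)
dvs.UpdateNetworkResourcePool(configSpec=[pool_spec])
```

With VC you would instead set the per-FlexNIC bandwidth in the server profile and leave the hypervisor alone.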
In short, just put one NIC on each VC module, then create the multiple networks in the VC server profile; that's it. In my blog I described how to divide the blade traffic inside the LOM, and if vMotion does not need to leave the blades, you can keep that vMotion network internal to the VC backplane.
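If you do keep vMotion on a VC-internal network, the ESX side is just an ordinary vmkernel interface on that network. A rough pyVmomi sketch, continuing the session from the first example (port group name, VLAN, IP, and vSwitch name are all placeholders):

```python
# Port group for vMotion on the vSwitch configured earlier; names/VLAN are placeholders.
pg_spec = vim.host.PortGroup.Specification(
    name="vMotion", vlanId=0, vswitchName="vSwitch1", policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

# vmkernel interface with a static IP, then mark it for vMotion traffic.
ip_cfg = vim.host.IpConfig(dhcp=False, ipAddress="10.0.50.11", subnetMask="255.255.255.0")
vnic_spec = vim.host.VirtualNic.Specification(ip=ip_cfg)
vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=vnic_spec)
host.configManager.virtualNicManager.SelectVnicForNicType(nicType="vmotion", device=vmk)
```

Because the VC network behind that port group has no uplinks, the vMotion traffic never leaves the enclosure backplane.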
Let me know your exact requirements.