
Re: ArubaOS 8.x MM Virtual vs. Hardware

There's a lot of bad information here, so let's cover the basics.

 

1. You do NOT need to enable promiscuous mode and forged transmits on ALL vSwitches on the ESXi hypervisor for the MM to work. If those settings bother you, create a dedicated port group with promiscuous mode and forged transmits enabled, leave them disabled on the vSwitch that contains the port group, and put the VMM in that port group; it works fine (a pyVmomi sketch of this port-group setup follows these points). This is a common requirement for *any* virtualized switch or router. It is not unique, nasty, or dangerous, and when done properly it carries no more risk than anything else you deploy.

 

Note as well that these requirements (promiscuous mode and forged transmits) will likely *never* change; they will always apply whenever you virtualize a switching product like our controllers.

 

2. You do not need to trunk all VLANs up to the MM; the MM only needs to be routable to and from the controllers it manages. So the MM effectively needs one VLAN/network at minimum. You CAN trunk multiple VLANs up to the MM depending on your use case, but otherwise it only needs one IP and one VLAN (or two IPs if you are doing VRRP).

 

3. While the HMM and VMM offer equivalent capabilities, the VMM offers significant cost savings. To support 1,000 APs with redundant MMs, you only need to purchase one MM-VA-1K license and you can deploy up to four VMMs (2xL2 with L3 between DCs). If you want redundancy with the hardware MM, you have to purchase two MM-HW-1K appliances to support 1,000 APs, and if you want four HMMs for 2xL2 with L3 between DCs, you are paying 4x as much. So the VMM *is* the better way to go.

 

4. LACP is never required for any Aruba product, but some people like to use it. For the VMM, LACP comes into play when your ESXi hypervisor's vSwitch is connected to multiple physical NICs. VMware tries to make this easy with NIC teaming; however, virtualized switches with firewalls won't simply accept frames arriving on one uplink or the other, so you have to configure that vSwitch's NICs with LACP between the hypervisor and the uplink switch (instead of NIC teaming with no configuration on the upstream switch). This is also common when carrying virtualized switching products across a vSwitch with multiple physical uplinks from the hypervisor, and it is more efficient.

 

5. In no way should the use of L2 VRRP create any kind of 'storm' on your network. We have VRRP L2 redundancy deployed on networks with more than 10k APs carrying over 100k concurrent client connections without issue. L2 VRRP is also the primary mechanism for controller clustering. This is not a risk unless it's done incorrectly, which could certainly cause issues, e.g. if you use the same VRRP ID and passphrase for multiple devices instead of a unique VRRP ID and passphrase for each pair of devices.

 

6. The VMM deploys with three virtual interfaces/network adapters. 'Network Adapter 1' maps to the OOB management interface, 'Network Adapter 2' maps to gig0/0/0, and 'Network Adapter 3' maps to gig0/0/1. You can certainly deploy on a single interface (I have multiple MMs in my lab on the same VMware hypervisor and the same vSwitches, and all of them use only 'Network Adapter 2' mapped to gig0/0/0). Just disable the network adapters you don't want to use, don't assign them to any vSwitch (or put them in a null vSwitch if it makes you feel better), and leave one of the gig0/0/0-1 interfaces in whatever vSwitch/port group you want.

 

 

7. Most entities that choose the hardware MM do so because a) their security policy requires a physical appliance (think government, military, etc.), b) the network team doesn't work well with or trust the virtual infrastructure team, or c) they simply prefer hardware appliances regardless of cost.
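
If it helps, here is a rough pyVmomi sketch of the port-group setup from point 1, for anyone who would rather script it than click through the vSphere client. This is not an official Aruba or VMware procedure, just one way to do it via the vSphere API; the vCenter address, ESXi host name, vSwitch name ('vSwitch0'), port group name ('MM-PG'), and VLAN ID (100) are placeholders you would replace with your own values.

# Sketch only: creates a dedicated port group with promiscuous mode and forged
# transmits enabled, leaving the parent vSwitch's security policy untouched.
# All names/IDs below are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the ESXi host that will carry the VMM (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

# Port group spec on the existing vSwitch, with the relaxed security policy
# applied only to this port group.
security = vim.host.NetworkPolicy.SecurityPolicy(allowPromiscuous=True,
                                                 forgedTransmits=True,
                                                 macChanges=False)
spec = vim.host.PortGroup.Specification(name="MM-PG",        # port group for the VMM
                                        vlanId=100,          # MM management VLAN
                                        vswitchName="vSwitch0",
                                        policy=vim.host.NetworkPolicy(security=security))

host.configManager.networkSystem.AddPortGroup(portgrp=spec)
Disconnect(si)

The vSwitch itself keeps promiscuous mode and forged transmits rejected; only VMs placed in 'MM-PG' see the relaxed policy.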

 

 

If you are a partner, you should have access to whatever training docs you need. As a customer, I agree we could do better with some kind of 'intro/hand-holding how-to-deploy' cookbooks, and that is being worked on. But this is a good place to start, and there are plenty of people here who can help.

However, I will add that the vast majority of your issues and concerns really should not be issues. Promiscuous mode and forged transmits are a requirement; if that is a deal breaker and you just cannot do it, even when enabled only on the port group, then hardware is your ONLY option. If you cannot figure out how to run VRRP, that will be an issue regardless of hardware or virtual, since we use L2 VRRP not only for MM redundancy but also for cluster redundancy among controllers. As long as your VRRP IDs and passphrases are unique, there should never be any risk of a 'storm' or interruption of service (unless you have some oddity on your network where something is intercepting or blocking VRRP advertisements).

Re: ArubaOS 8.x MM Virtual vs. Hardware

Also, don't forget to provide links or attach the documents you are using to formulate your decision/process. Thanks!

Re: ArubaOS 8.x MM Virtual vs. Hardware

Last one: you can watch multiple videos here

 

https://www.youtube.com/channel/UCFJCnuXFGfEbwEzfcgU_ERQ/search?query=AOS+8

 

These cover AOS 8 setup, config, operation, etc. The channel is informally run by Aruba SEs and partners (because sadly, getting training docs onto the official Aruba channel is a nightmare, as I've found when I tried). The folks who contribute to this channel provide excellent information on multiple topics. The link above is filtered on the 'AOS 8' search.

Frequent Contributor I

Re: ArubaOS 8.x MM Virtual vs. Hardware

Just to build a bit on what jhoward2 said...

 

We ended up going with a pair of hardware MM controllers because of option b). More specifically, our virtualization environment at the time just wasn't up to snuff to handle the controllers. They've been behaving just fine.

 

As for the VMware networking stack issues, as he said, they are pretty much guaranteed to occur when virtualizing any kind of advanced network appliance. The vSwitch is designed around the assumption that it has complete and correct knowledge of which MAC addresses are present on which virtual ports, which is true for all virtual hardware MAC addresses, since those are assigned by the hypervisor. Network appliances typically break this in two ways.

 

The first is forwarding traffic. This obviously entails a ton of (what appear to the vSwitch to be) made-up addresses coming out of the guest, and what's more, the appliance expects the vSwitch to forward traffic for all of those addresses back to it. This also applies to devices that have been quiet, meaning the appliance needs all traffic flooded to it.

 

This obviously isn't the case for the MM, though; here it's needed mainly for the floating VRRP VIP. Search the interwebs for VMware and VRRP or CARP (which works very similarly) and you'll find it's a very, very long-running issue, with no better solution than creating a separate port group with tweaked settings. I've been running a pair of virtual MMs in my lab environment, and once I gave them an appropriate port group, they've behaved themselves ever since.

Occasional Contributor II

Re: ArubaOS 8.x MM Virtual vs. Hardware

I do have to say, 8.x has been a really tough bear to deal with: there's a lack of documentation for fundamental things, and every single version of the software has had major problems. I upgrade to every single release to fix current problems, only to find that it breaks major new things.

 

To touch on the OP's topic, I had the same issue when installing the VM MM. It took me weeks to get real answers on how to deploy it properly.

Guru Elite

Re: ArubaOS 8.x MM Virtual vs. Hardware

milleriann,

 

Please be specific, so that others can learn.

******************
Answers and views expressed by me on this forum are my own and not necessarily the position of Aruba Networks or Hewlett Packard Enterprise.
******************
Occasional Contributor II

Re: ArubaOS 8.x MM Virtual vs. Hardware

Here is the exact order of steps I ended up performing to install my VM MM. This part has been working fine. I only have one MM.

 

1) A single VLAN access port to the MM. In my deployment this VLAN also goes L2 to the MDs.

2) Promiscuous Mode and Forged Transmits set to "Accept" on the MM's port group. Other VMs on the same VLAN can use a different port group so these settings don't affect them.

3) NICs 1 and 3 disconnected in VMware. NIC 2 set for data and management (gig0/0/0).

4) Pre-Allocate Memory - "Reserve all Guest Memory".

5) 8 vCPUs, configured as 8 cores on 1 socket (for my sizing). This is a manual change.

6) Boot MM with default RAM and HDD first.

7) Do the initial config (IP address, etc.).

8) Turn off MM.

9) Upgrade RAM/HDD as needed for your sizing. I used 32 GB RAM and a 32 GB HDD for disk 2. You have to add a NEW VMware HDD that is larger than the OLD disk 2. Boot the MM back up and it will automatically migrate the data from drive 2 to drive 3. Once it is fully booted, remove the old drive 2, which you can do with the MM powered on because that disk is no longer mounted (keep disk 1 untouched). A pyVmomi sketch covering steps 3-5 and 9 follows this list.
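
Not part of the poster's procedure, just a hedged sketch: if you would rather make the VM-side changes from steps 3, 4, 5, and 9 through the vSphere API instead of the vSphere client, something like the pyVmomi snippet below is one way to do it. The VM name ('aruba-mm-1'), the adapter ordering, and the 32 GB sizes are assumptions for illustration; verify them against your own deployment, and keep the MM powered off for the CPU/memory change.

# Sketch only: disconnect Network Adapters 1 and 3, reserve all guest memory,
# present 8 vCPUs as 8 cores on 1 socket, and add a new, larger second data disk.
# VM name, sizes, and device ordering are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "aruba-mm-1")   # assumed VM name

nics = [d for d in vm.config.hardware.device
        if isinstance(d, vim.vm.device.VirtualEthernetCard)]
disks = [d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualDisk)]
changes = []

# Step 3: disconnect adapters 1 and 3, leaving adapter 2 (gig0/0/0) connected.
for nic in (nics[0], nics[2]):
    nic.connectable.connected = False
    nic.connectable.startConnected = False
    changes.append(vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic))

# Step 9 (first half): add a NEW 32 GB disk on the same controller; the MM migrates
# its data to it on the next boot. (If the next unit number lands on 7, skip it --
# that slot is reserved for the SCSI controller.)
new_disk = vim.vm.device.VirtualDisk(
    controllerKey=disks[0].controllerKey,
    unitNumber=disks[-1].unitNumber + 1,
    capacityInKB=32 * 1024 * 1024,
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(diskMode="persistent",
                                                          thinProvisioned=True))
changes.append(vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=new_disk))

# Steps 4 and 5: reserve all guest memory and set 8 vCPUs as 8 cores on 1 socket.
spec = vim.vm.ConfigSpec(numCPUs=8, numCoresPerSocket=8,
                         memoryMB=32 * 1024,
                         memoryReservationLockedToMax=True,
                         deviceChange=changes)
WaitForTask(vm.ReconfigVM_Task(spec=spec))
Disconnect(si)

After the MM boots and finishes migrating its data, the old disk 2 is removed manually, exactly as described in step 9.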

 

If there is a better way, someone please share their info.

 

Also, each version of AOS 8.x has had major things break, like APs disconnecting, NTP breaking, TACACS breaking, clients disconnecting, SAPD issues, VLAN leakage, memory spikes, the cluster breaking, CPsec errors, client slowness, CoA breaking, etc.

 

The current 8.2.1.0, the newest as of today, seems solid on all of the above issues, but AirGroup is completely broken and my users are dead in the water on that piece right this second.

Re: ArubaOS 8.x MM Virtual vs. Hardware

That's more or less it.

 

If there are issues with AirGroup, please open a TAC case. I can't speak to the other issues without more specifics.

 
