What is the Aruba BP for user subnets in a clustered setup? Both for ESXi setups as well as with hardware. What is the recommended configuration?
Since the gateways must be placed on the device nodes instead of the folder, how does Aruba recommend configuring the gateways in a cluster? With AOS 6 you could simply put the .1 on both your master and master-failover, since only one was active at a time. With AOS 8 this does not work because clients are load-balanced across the cluster nodes. It seems VRRP would not work either, as that is a master/backup protocol. I haven't found anything relevant in the user guides, but perhaps I'm overlooking the crucial information.
I am looking at perhaps moving the gateways off of the controllers and onto our routers, but that creates a lot more complexity: our cluster members sit in two physically separated data centers, L2-connected but with two pairs of Nexus switches doing the routing. In that case we would need to run an HSRP instance for every wireless subnet. Additionally, I have had issues there where DHCP is not working correctly within our VMCs. I am seeing a bunch of ARP requests for the gateway and the clients, and even after receiving an IP (which takes over a minute) the clients are not able to ping the gateway. I have a TAC case open about that. DHCP works fine outside of the ESXi environment, and the security settings for the VLAN All trunk are set correctly.
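For reference, running an HSRP instance per wireless subnet on the Nexus pair would look roughly like this (a sketch only, assuming NX-OS; the VLAN, addresses, group number, and priorities are made-up examples):

Nexus A:
  feature hsrp
  interface Vlan10
    no shutdown
    ip address 10.0.10.2/24
    hsrp 10
      priority 110
      preempt
      ip 10.0.10.1

Nexus B:
  feature hsrp
  interface Vlan10
    no shutdown
    ip address 10.0.10.3/24
    hsrp 10
      priority 100
      ip 10.0.10.1

Clients then use the shared .1 virtual IP as their gateway, and it stays reachable if either Nexus fails — but as noted, this has to be repeated for every user subnet.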
So how are you all doing this? Is there documentation on the best practice?
I think what you need to look at is creating Layer 3 clusters instead of Layer 2, unless you are able to do VXLAN across the data centers.
To make a Layer 3 cluster, you can keep the same VLAN numbers but use different Layer 3 subnets in each data center.
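In other words, the same VLAN ID exists in both data centers, but each side routes its own subnet — something like this (a sketch; the VLAN and subnets are made-up examples):

Data Center A:
  interface vlan 10
    ip address 10.1.10.1/24

Data Center B:
  interface vlan 10
    ip address 10.2.10.1/24

Because the subnets don't stretch between sites, there is no need for a shared first-hop gateway across data centers.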
When a user is Anchored to a Controller in Data Center A but the AP they are attached to is Anchored to a Controller in Data Center B, the environment will dynamically create tunnels to pass traffic. The same happens when the user roams. The user session will always stay on its Anchor Controller.
Now the downside to doing this shows up in a failure. If the Data Center A controller fails and the users fail over to Data Center B for their user anchor controller, they will be disconnected while waiting to get a new IP address from the subnet in that data center.
In my environment we have two data centers and chose to run two clusters: one cluster in DC A and one in DC B. Normally all users are running in DC A until a failure.
So how are you doing the gateways for your cluster A?
Do you have different gateways on each node member of that cluster?
Cluster A Member A:
  interface vlan 10
    ip address 10.0.10.1/24
  interface vlan 11
    ip address 10.0.11.1/24

Cluster A Member B:
  interface vlan 12
    ip address 10.0.12.1/24
  interface vlan 13
    ip address 10.0.13.1/24
We're not doing VXLAN anywhere right now so that's kind of out of the question for the time being.
All gateways are on the upstream Layer 3 switch the controllers are directly connected to. We opted for physical controllers, but it's the same idea for VMs: the upstream Layer 3 switch, or the router in your case.
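Concretely, the user VLAN is trunked down to the controllers and the gateway SVI lives on the upstream switch — something like this (a sketch; the VLAN, address, and interface names are made-up examples):

  interface vlan 10
    ip address 10.0.10.1/24
  interface ethernet1/1
    description to-controller
    switchport mode trunk
    switchport trunk allowed vlan 10

The controllers just bridge the user VLAN; no gateway address is configured on any cluster node.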
In my mind, since I designed and created the environment: why would I have the controllers do a ton of work when I have a dedicated device designed to do routing connected to them?