Hi all, I am doing an open Guest WLAN PoC with 2 clusters of "branch" IAPs and a central controller to terminate IPsec tunnels. All is working as expected. In this type of solution I need to have, on the trunk between the IAPs and the switch, the VLAN associated with the Guest WLAN, so that all IAPs can send traffic to the VC IAP and from there through the IPsec tunnel to the central controller.

The first problem (probably only a problem to me, since it is the normal behaviour of the solution) is that this puts the MACs from the guests in the MAC table of my switches, so anyone (as this is an open WLAN) could try to flood the switch with MACs and do a "CAM table overflow" attack that could disrupt the normal behaviour of the switch.

The second problem (a lesser one) is that with "Centralized L2 Mode" the whole solution behaves like one big L2 domain: I am seeing the MACs from the clients in IAP cluster #1 appearing in the switch that supports IAP cluster #2 (via the port connected to the VC). So even if no one attacks the solution, if I have, say, 500 IAP clusters with 64 clients each, all my switches will hold lots of client MACs.

For the first problem, I think the default WLAN limit of 64 clients should help mitigate this type of attack (but I guess I could just associate with one MAC, disassociate, associate with another MAC and so on, as the MAC table timeout is large enough for me to do this many times... note to self, test this...). Another option is port security (Cisco switch) with a high number of allowed MACs, but even this would be hard to implement because of problem #2 (in theory I could set the maximum number of secure MACs as big as the IP range I have in the DHCP server).

For the second problem, I think only a different type of solution would help, probably "Local Mode" or "Distributed L3 Mode", but that means I need routing at the central controller, something I am trying to avoid...

So after lots of "bla bla", the big question for the gurus is: how should I create a Guest solution that doesn't have these problems, or how can I mitigate them?
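For reference, the port-security idea on the Cisco side might look roughly like this (the interface name, and a maximum of 64 to match the WLAN client limit, are just assumptions for illustration — not a tested config):

```
! Hypothetical edge port facing an IAP - interface name and limits are assumptions
interface GigabitEthernet1/0/10
 switchport mode trunk
 switchport port-security
 switchport port-security maximum 64
 switchport port-security violation restrict
 switchport port-security aging time 5
 switchport port-security aging type inactivity
```

With `violation restrict` the port stays up and just drops frames from MACs over the limit, and inactivity-based aging lets stale guest MACs free up slots for new clients.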
If you can get the scenario to work with Distributed L3, I would use that instead. It provides a much more robust and scalable solution, and while routing is enabled, you really only have one northbound route out of the controller in the datacenter to worry about...
With Centralized L2, at scale, the worry would be broadcasts and multicasts having to traverse the WAN. If it's a small deployment there shouldn't be any issues... but scale that out to 100s or 1000s and there will be.
I am not worried about broadcast/multicast as I am dropping them at the WLAN level (broadcast filter all).
I am more worried about MAC address exhaustion in my switches (worried, or a little paranoid :) ).
Distributed L3 would not take care of problem #1, as the client MACs would still appear in the switch.
I am also worried about having the L3 (in this case the default GW and a DHCP server) of an open WLAN at the IAP when the same IAP also has a corporate WLAN, as this could be a vector for an exploit (probably being paranoid again...).
After some reading I now know that it would be a little tricky to do the "attack" to overload the MAC table, but with the aircrack-ng suite I could probably generate a lot of auth/deauth packets with different MACs in the Guest WLAN and do it (have to test it in a lab...).
So I am going to test the scenario with Centralized L2 where all the IAPs create a GRE tunnel to the controller (so I guess there is no need for the VLAN associated with the Guest WLAN to be on the switch connected to the IAPs) and only worry about the dedicated switch on the central controller side.
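A rough sketch of how one might drive that lab test with the aircrack-ng suite (the interface, SSID and BSSID below are placeholders, and the aireplay-ng line is echoed rather than executed so the loop can be dry-run safely):

```shell
#!/bin/bash
# Generate a random locally-administered MAC (02:xx:xx:xx:xx:xx) per fake client.
rand_mac() {
  printf '02:%02x:%02x:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))
}

# Placeholders -- replace with your lab values.
IFACE=wlan0mon          # monitor-mode interface
ESSID="Guest"           # open guest SSID
BSSID=00:11:22:33:44:55 # AP BSSID

# Dry run: print the fake-auth command for 10 random client MACs.
# Remove the leading 'echo' to actually send the auth frames in the lab.
for i in 1 2 3 4 5 6 7 8 9 10; do
  mac=$(rand_mac)
  echo aireplay-ng --fakeauth 0 -e "$ESSID" -a "$BSSID" -h "$mac" "$IFACE"
done
```

Each fake authentication with a fresh source MAC is what would push a new entry into the switch's CAM table, which is exactly the behaviour to measure against the WLAN client limit and MAC aging timers.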
I'm running 6 sites with controller- and IAP-based guest access at the edge, GRE-tunneling the guests back to a redundant pair of controllers at my Internet connection where all clients drop off.
The guests enter the GRE tunnel at the (I)AP and don't get out until they reach the datacenter, and then only reside on the DMZ switch long enough to get NATted to the outside of my firewall. 150-250 guest devices (only about 50 authenticated, the rest just pounding their heads against the captive portals).
No guest MAC exists anywhere in the internal network except to traverse the Aruba gear in-tunnel on their way out. I'm filtering broadcasts etc. and haven't had any issues in a few years.
You should be AOK as far as I can see.
The only "problem" that I see is that the central controller will have to be more powerful, as we need a tunnel per IAP and not per IAP cluster, but it's way more secure, so I guess it's the way to go :)
GRE is just encapsulation, not encryption, so setting up a GRE tunnel per AP should be less load-intensive than passing traffic back to the VC to send in a VPN tunnel.
I have a deployment on a 7210 with over 600 APs forming per-AP GRE tunnels with no problems.
If you are sending this over the Internet and have security concerns, I would probably not use GRE. If you don't care that the traffic is not encrypted, or if it's going over a private WAN, per-AP GRE should work fine.
Just for future reference.
I tested the scenario of a per-IAP GRE tunnel with "automatic GRE config" and it worked perfectly.
No more guest MACs at the local switch :)
Thanks to all that replied.
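For anyone finding this later, the manual per-AP GRE variant can be sketched in the Instant CLI along these lines (the controller IP and GRE protocol type below are placeholders, not values from my setup; with "automatic GRE" the tunnel endpoints are negotiated over the IPsec connection instead, so the static endpoint line is not needed):

```
# Hypothetical Instant CLI sketch - controller IP and type value are placeholders
configure terminal
 gre primary 10.10.10.1
 gre type 250
 gre per-ap-tunnel
end
commit apply
```

The key knob is `gre per-ap-tunnel`: with it, each IAP builds its own tunnel to the controller instead of forwarding guest traffic to the VC first, which is what keeps the guest MACs off the local switch.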
@SethFiermonti wrote:If you can get the sceanrio to work with Distributed L3, I would use that instead. This provides a much more robust and scalable solution and while the routing is enabled, you really only have one northbound route out of the controller in the datacenter to worry about... With Centralized L2, at scale, the worry would be broadcasts and multicasts and having to traverse the WAN with that regard. if it's a small deployment, there shouldn't be any issues...but scale that out to 100s to 1000s and there will be.
A little late on the reply but wanted to share something.
I love Distributed L3, but the one problem with it is that if you want redundancy at the DC (two controllers in Active-Standby using VRRP), your Distributed L3 subnets are not shared between the controllers, so branch IDs may change if your VRRP flips.
Instead of using VRRP, consider configuring individual primary and backup hosts in the IAP's config. That way the branch ID allocation happens against the primary only, and everything continues to work through a primary-controller and/or master-IAP failure.
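In Instant CLI terms that suggestion would be something along these lines (the controller IPs are placeholders for illustration):

```
# Hypothetical Instant CLI sketch - controller IPs are placeholders
configure terminal
 vpn primary 10.10.10.1
 vpn backup 10.10.10.2
end
commit apply
```

Here the IAP cluster points at each controller individually rather than at a shared VRRP address, so the branch ID stays anchored to the primary.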