I want my guest network traffic to NAT out of the two clustered controllers to the DMZ network (192.168.100.X/24), using the VRRP virtual IP of the two (192.168.100.vrrp) as the egress IP address. Currently, traffic NATs/PATs correctly, but with the individual controller's IP on that DMZ network. I likewise have a guest network controller VRRP (10.100.0.vrrp) which serves as the default gateway for the guests on that subnet.
The reason this is needed is two-fold: the firewall will only need one entry for filtering traffic from the VRRP IP, and if one controller fails, the other will take over without interrupting sessions.
Any thoughts on how to achieve this?
Do you have a diagram of the setup to share? You can have a NAT pool with a single IP address, and in the ACL for that guest traffic just have it src-nat with that pool. Is that what you're after?
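Michael's suggestion might look roughly like this on the controller CLI (a minimal sketch only; the pool name, ACL name, role name, and pool address are placeholder assumptions, not details from this thread):

```
! Define a NAT pool whose start and end address are the same single IP
! (here a placeholder DMZ address standing in for the VIP):
ip nat pool guest-vip-pool 192.168.100.10 192.168.100.10

! Session ACL that source-NATs guest traffic with that pool:
ip access-list session guest-snat
  user any any src-nat pool guest-vip-pool

! Apply the policy to the guest user role:
user-role guest
  access-list session guest-snat
```

Whether this behaves correctly with a VRRP VIP in a clustered deployment is exactly the question discussed below.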
As Michael noted, you could use the same IP for SNAT on both controllers. I wouldn't recommend this unless your controllers are running master/standby and only one controller can take the role of terminating APs at a time.
If you're using master/local and get into a situation where both controllers are terminating APs (which can happen for a number of reasons), you may find yourself with broken traffic flows, since both controllers will SNAT to the same IP. Even with IP mobility enabled, there may be cases where the home agent is not on the controller holding the active VRRP.
My recommendation for an L3 deployment would be to route the traffic and not SNAT on the controller at all. That leaves the firewall to SNAT, and you gain more insight into end users' traffic. The other option would be to go L2 and move the default gateway to the firewall.
Since you're trying to preserve client sessions in the firewall, I would target a design with the default gateway residing on the firewall and extend the VLAN into the Aruba environment. The other factor that comes into play is preservation of the client's IP. Since most customers run HA firewalls, it's easier to create the guest DHCP service on the firewall, as that also helps preserve the IP across controller failures.
Thank you justink84, your explanation clarified the behavior I was seeing in my lab with the nat-pool and VRRP setup. As I am using AOS 8.2 with clustered controllers in the L2-Connected state, you are right that client traffic terminates on multiple controllers, which would cause routing/ARP problems on the DMZ network once it src-nat'ed out.
Since client sessions are important here, we are ultimately going with a layer-2 solution: extending the guest network to the firewall, which will be the clients' default gateway.
Thank you for your recommendation!
Glad you were able to review it in the lab and it made sense with your topology.
I was not aware at first that you were using AOS 8 with an L2 cluster. This type of environment is really intended to load-balance clients and APs among the pool of controllers.
Also, troubleshooting SNAT on Aruba can be a bit convoluted/difficult at times. When using NAT, it's not the same as a standard flow-based firewall: when you look at the session table for a client (source IP), you're not going to see the response. You can look at the session table for the destination IP, although there could be many sessions going to that address, which makes it harder to parse through. If you target based on source IP, you will need to capture the source port and then look at the global session table for the response on that destination port.
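The two-step lookup described above might look like this on the controller CLI (a sketch only; the client and destination addresses and the port are hypothetical placeholders):

```
! Step 1: find the client's outbound session and note its source port.
! (10.100.0.50 is a placeholder guest client IP.)
show datapath session table 10.100.0.50

! Step 2: the return flow won't appear under the client's IP once it has
! been SNATed, so search the global session table for the return traffic
! arriving on that port (54321 is a placeholder source port from step 1):
show datapath session table | include 54321
```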
As long as you're able to preserve the client's IP address upon losing a controller (DHCP running on the firewalls, or a dedicated DHCP server), I don't see any real issues you would need to worry about.
I have the same issue, but in my case the Aruba MCs are the firewalls and the L3 boundary, so I simply must have this working with SNAT-to-VIP.
The client gateway IP on the MC cluster is also on a VIP with tracking enabled, therefore it should not matter where on the cluster each user's UAC is, because they will (hopefully) simply send their L3 traffic to the correct VRRP-Master and the L2 switch will carry it there. The goal is to maintain sessions if a single MC fails.
I've discovered a bug and am currently working with TAC: as soon as the NAT pool is created (listing the VIP), VRRP functionality breaks on the VRRP backup. (The backup begins responding to ARP requests for the VIP using its own real MAC instead of allowing the master to respond with the special VRRP MAC.) The problem is immediately resolved when the NAT pool is deleted. The NAT pool doesn't need to be assigned to any policy; simply by existing, it will trigger the issue.
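For context, the configuration that reportedly triggers the issue is roughly the following (a sketch only; VLAN, VRRP ID, priority, and addresses are placeholders, not the poster's actual values):

```
! VRRP instance advertising the VIP on the DMZ VLAN:
vrrp 100
  vlan 100
  ip address 192.168.100.5
  priority 110
  no shutdown

! Merely defining a NAT pool containing that same VIP -- even without
! referencing the pool in any policy -- reportedly caused the VRRP
! backup to start answering ARP for the VIP with its real MAC:
ip nat pool vip-pool 192.168.100.5 192.168.100.5
```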
Once I have that bug resolved, I plan to implement as per ryh's original plan.