Wireless Access

MVP
Posts: 498
Registered: ‎04-03-2007

Hash vs Even VLAN pooling results

Just wanted to share with the community that we have migrated all our primary VLAN pools to the even algorithm. I can attest that this algorithm distributes clients much more evenly and results in far less wasted address space.
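For anyone weighing the two modes, here is a toy model of how they differ. This is my own sketch for illustration, not Aruba's actual implementation; the VLAN IDs and the SHA-256 stand-in for the controller's internal hash are assumptions.

```python
# Toy model of the two VLAN-pool assignment strategies.
# NOT Aruba's implementation; SHA-256 stands in for the controller's hash.
import hashlib

POOL = [100, 101, 102]  # hypothetical VLAN IDs in the pool

def assign_hash(mac: str) -> int:
    """Hash mode: the client's MAC deterministically picks a VLAN, so the
    same client always lands in the same VLAN (and tends to keep its IP),
    but nothing guarantees the buckets fill evenly."""
    digest = hashlib.sha256(mac.encode()).digest()
    return POOL[int.from_bytes(digest[:4], "big") % len(POOL)]

def assign_even(counts: dict) -> int:
    """Even mode: pick the VLAN with the fewest current clients, so the
    pool stays balanced, but a returning client may land in a different
    VLAN than before (hence the preserve-vlan requirement)."""
    return min(POOL, key=lambda v: counts[v])

counts = {v: 0 for v in POOL}
for mac in ("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"):
    counts[assign_even(counts)] += 1
print(counts)  # even mode: one client per VLAN
```

The trade-off in one line: hash mode is sticky per client but blind to load; even mode is load-aware but not sticky per client.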

 

FYI for those that have been on the fence.

 

Cheers.

==========
Ryan Holland, ACDX #1 ACMX #1
The Ohio State University
MVP
Posts: 4,238
Registered: ‎07-20-2011

Re: Hash vs Even VLAN pooling results

What's the lease time you have assigned?

One thing I noticed (maybe I'm wrong on this) is that sometimes devices would end up with two IP addresses, because the controller at times moved a client to another VLAN X after it had already been placed in the user table with VLAN Y, in order to keep the balance (even) across the VLANs.
Thank you

Victor Fabian
Lead Mobility Engineer @ Integration Partners
AMFX | ACMX | ACDX | ACCX | CWAP | CWDP | CWNA
MVP
Posts: 498
Registered: ‎04-03-2007

Re: Hash vs Even VLAN pooling results

10 minutes.
==========
Ryan Holland, ACDX #1 ACMX #1
The Ohio State University
Occasional Contributor I
Posts: 6
Registered: ‎10-31-2014

Re: Hash vs Even VLAN pooling results

Ryan,

 

We are currently running the Even VLAN pooling algorithm.  I agree that it distributes IP addresses quite nicely.  In our environment we average 3,500 clients daily in an even VLAN pool with (7) /22 networks.  I most definitely like the fact that I don't have to worry about running out of IP addresses with this deployment.
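For context, the headroom in that design works out roughly as follows (assuming ~1,022 usable hosts per /22, i.e. network and broadcast excluded; real DHCP scopes lose a few more addresses to gateways and reservations):

```python
# Rough capacity check: (7) /22 scopes vs. ~3,500 daily clients.
# Assumes 2**(32-22) - 2 = 1022 usable hosts per /22.
prefix_len = 22
scopes = 7
hosts_per_scope = 2 ** (32 - prefix_len) - 2   # 1022
total = scopes * hosts_per_scope               # 7154
clients = 3500
print(total, f"{clients / total:.0%} utilized")  # 7154, ~49% utilized
```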

 

But we are currently having an issue.  With the Even VLAN algorithm turned on, we have noticed that as clients roam from AP to AP they will sometimes switch VLANs during the roam.  The client will often show a disconnect, and the user then has to disconnect and reconnect in order to reestablish the network connection.

 

We are able to mitigate the issue by enabling the preserve-vlan setting, but we are still seeing it from time to time.

 

We would like to use the Hash algorithm because the client keeps the same IP address while roaming.  With our current design, though, we are afraid to turn on the hash algorithm because we think there is a good possibility we could overrun one of the /22 DHCP scopes, given the uncertainty of the Hash pooling algorithm.
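That risk can at least be estimated with a toy simulation. This is my own sketch, using SHA-256 as a stand-in for whatever hash the controller actually uses (which is not documented here): bucket random MACs into 7 scopes and look at the worst-case scope load.

```python
# Toy simulation: hash N random MACs into 7 VLANs, then compare the
# fullest bucket against a /22 scope (~1022 leases).
# SHA-256 stands in for the controller's real, undocumented hash.
import hashlib
import random

random.seed(7)           # fixed seed so the run is repeatable
SCOPES = 7
SCOPE_SIZE = 1022
N_CLIENTS = 3500

counts = [0] * SCOPES
for _ in range(N_CLIENTS):
    mac = random.getrandbits(48).to_bytes(6, "big")
    digest = hashlib.sha256(mac).digest()
    counts[int.from_bytes(digest[:4], "big") % SCOPES] += 1

print(sorted(counts), "max =", max(counts), "of", SCOPE_SIZE)
```

With a uniform hash and random MACs the spread stays close to N/7 ≈ 500, far below 1,022. The practical worry is that real MAC populations (same-vendor OUIs, sequentially assigned addresses) may not hash uniformly, and hash mode will never rebalance if they don't.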

 

Have you seen any issues like this in your deployment?

 

If you have do you have any suggestions?

 

Thank you!

 

 

MVP
Posts: 498
Registered: ‎04-03-2007

Re: Hash vs Even VLAN pooling results

"Preserve vlan" is a requirement of using the even vlan algorithm, so do make sure that is enabled.

With that feature enabled, the controller first looks for the client's MAC address in the datapath bridge table. If an entry exists, the client is placed into that VLAN. Thus, the client DOES keep the same VLAN assignment if they've been online recently. (I *think* the bridge table is hardcoded to cache entries for 8 hrs. Someone from Aruba can correct me if I'm wrong.) Retaining the same IP, however, is up to your DHCP server.
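In pseudocode terms, my understanding of the even-with-preserve-vlan path is roughly the following. This is a sketch of the behavior described above, not controller code; the bridge-table cache, the 8-hour TTL, and the VLAN IDs are all assumptions.

```python
# Sketch of even-mode assignment with preserve-vlan, as I understand it.
# The bridge-table cache and its 8-hour TTL are assumptions, not confirmed.
import time

BRIDGE_TTL = 8 * 3600          # assumed cache lifetime, in seconds

bridge_table = {}               # mac -> (vlan, last_seen)
vlan_counts = {100: 0, 101: 0, 102: 0}

def assign(mac: str, now: float) -> int:
    entry = bridge_table.get(mac)
    if entry is not None and now - entry[1] < BRIDGE_TTL:
        vlan = entry[0]         # seen recently: keep the old VLAN
    else:
        # not cached: balance onto the least-loaded VLAN in the pool
        vlan = min(vlan_counts, key=vlan_counts.get)
        vlan_counts[vlan] += 1
    bridge_table[mac] = (vlan, now)
    return vlan

now = time.time()
first = assign("aa:bb:cc:dd:ee:ff", now)
again = assign("aa:bb:cc:dd:ee:ff", now + 600)  # reconnect 10 min later
print(first == again)  # True: same VLAN within the cache window
```

Note that this only preserves the VLAN; whether the client gets the same IP back is still up to the DHCP server's lease handling.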

I would avoid hash if you have any fears of running out of non-RFC1918 address space. That's based on our experience of >45,000 concurrent clients.
==========
Ryan Holland, ACDX #1 ACMX #1
The Ohio State University
Frequent Contributor I
Posts: 83
Registered: ‎10-27-2013

Re: Hash vs Even VLAN pooling results

Hi

 

Just to add my experience thus far - though I think it is something for me to ask TAC to look at and advise on - I might have missed something.

We have 2 named VLAN pools, each with 3 VLANs assigned to them. We have preserve client VLAN enabled on the VAPs.

Upon implementation we noticed that clients weren't being balanced across the VLANs that well. Clients would join the VLANs randomly, in no precise order and not balanced (this was when we had approximately 60 clients). When we reached our peak of around 900+ clients, 2 of the DHCP scopes (2 VLANs) in the one named pool were full; in the other pool, one DHCP scope filled up.

The scopes that weren't full in both pools were only 50% utilised.
