Blogs

A Roaming Paradox

By Radzima posted Jul 12, 2016 12:00 PM

  

Recently, in a discussion between myself and a few other engineers, someone brought up the topic of disabling UNII-2e (channels 100-140) when using fast roaming in a voice deployment. The reasoning is that because wireless clients cannot actively scan these channels due to FCC regulations around radar detection and avoidance (DFS), they end up scanning and waiting for longer periods of time, which in turn leads to longer, delayed roams. This makes sense given the restrictions and requirements surrounding that chunk of spectrum, but the conversation quickly moved in two different directions… First, does this mean you should disable UNII-2e channels in a highly sensitive voice deployment to improve roaming throughout the network, or is the theoretical gain negligible compared to the added co-channel and adjacent-channel interference? Second, would adding 802.11k provide enough information to the clients that they would scan smarter and faster; fast enough to mitigate the lag in roam times by finding the best neighbor in less time?

 

To Disable or Not to Disable?

In any wireless network, one of the most important factors to consider when designing is frequency reuse. There is a finite capacity to the spectrum we use. The more access points (and clients) you have contending for airtime and sharing that space, the closer you get to reaching that limit. The 2.4GHz band is a perfect example of this - 3 channels simply aren’t enough on their own with the density and performance requirements networks need to meet these days. This is why applications such as voice moved to the 5GHz band and why we’re relying so heavily on it to carry the load. Put simply, it’s a lot easier to achieve proper reuse and a seamless experience with 20+ channels than it is with 3. So other than the occasional popular device that doesn't support them, why would we voluntarily want to give up nearly half of those channels? Well, proper design and wireless performance aren’t only about reuse.
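
To put a rough number on "nearly half", here’s a quick sketch of the 20MHz channel counts as I understand the US plan. Treat the lists as illustrative only; exact availability depends on your regulatory domain and on what your APs and clients actually support.

```python
# Rough sketch of the US 5GHz 20MHz channel plan (illustrative only; exact
# availability varies by regulatory domain and hardware support).
UNII_1  = [36, 40, 44, 48]                                         # no DFS
UNII_2  = [52, 56, 60, 64]                                         # DFS required
UNII_2E = [100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140]  # DFS required
UNII_3  = [149, 153, 157, 161, 165]                                # no DFS

ALL_5GHZ = UNII_1 + UNII_2 + UNII_2E + UNII_3
TWO_4GHZ = [1, 6, 11]  # the usual three non-overlapping 2.4GHz channels

print(f"2.4GHz non-overlapping channels: {len(TWO_4GHZ)}")
print(f"5GHz channels with UNII-2e:      {len(ALL_5GHZ)}")
print(f"5GHz channels without UNII-2e:   {len(ALL_5GHZ) - len(UNII_2E)}")
```

Counting it out that way, pulling UNII-2e takes you from roughly two dozen 20MHz channels down to around thirteen, which is why the question is worth asking at all.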

 

Clients move. Sometimes clients move fast. The time it takes to roam between APs in a network while maintaining a strong and stable connection is, in some environments, just as important as performance when stationary. There are many things that can be done to improve roaming, and it all comes down to making sure the client can quickly find, and then just as quickly transition to, the next AP. If a client isn't able to actively scan to find those neighbors, as on the DFS channels, that can be pretty tricky to pull off.
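
As a back-of-the-envelope illustration of why that matters, here’s a sketch comparing a full 5GHz sweep with and without the UNII-2e channels in the scan list. The dwell times are assumptions picked purely for illustration (real clients vary wildly by driver and firmware); the point is that a passive dwell has to be long enough to catch a beacon, so every DFS channel in the list is expensive.

```python
# Back-of-the-envelope scan-time estimate. The dwell times are assumptions
# picked for illustration only; real clients vary wildly by driver/firmware.

ACTIVE_DWELL_MS = 30    # assumed per-channel dwell when the client can probe
PASSIVE_DWELL_MS = 110  # assumed per-channel dwell when it must wait for a beacon

NON_DFS = [36, 40, 44, 48, 149, 153, 157, 161, 165]
UNII_2 = [52, 56, 60, 64]                                          # DFS
UNII_2E = [100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140]  # DFS

def scan_time_ms(channels, dfs_channels):
    """Sum the assumed dwell time over every channel in the scan list."""
    return sum(PASSIVE_DWELL_MS if ch in dfs_channels else ACTIVE_DWELL_MS
               for ch in channels)

dfs = set(UNII_2 + UNII_2E)
with_unii2e = scan_time_ms(NON_DFS + UNII_2 + UNII_2E, dfs)
without_unii2e = scan_time_ms(NON_DFS + UNII_2, dfs)

print(f"Full 5GHz sweep, UNII-2e enabled:  ~{with_unii2e} ms")
print(f"Full 5GHz sweep, UNII-2e disabled: ~{without_unii2e} ms")
```

With those made-up numbers, dropping the eleven UNII-2e channels cuts a full sweep from roughly two seconds to well under one. The absolute values aren’t the point; the passive dwells dominating the total is.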

 

802.11k to the Rescue!

802.11k is a standard that was introduced several years ago but has seen slow client uptake. It added several features to enhance radio resource management within a wireless network; the one specific to this topic is neighbor reporting. When a network and client both support 802.11k (and it is enabled), the network provides neighbor reports containing bits of information about the access points surrounding the one the client is associated to. This gives the client a much better understanding of its environment and allows it to make better decisions when it comes time to roam. The second question in this discussion centered on whether that list of neighbors, and the channels they are transmitting on, would allow a client to reduce passive scanning time by skipping every channel where it has been told no AP is going to be.
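
To make that question concrete, here’s a hypothetical sketch of what "scan smarter" could look like: the client simply skips every channel that doesn’t appear in the neighbor report. The data structures and numbers are invented for illustration, not taken from any real client implementation; the actual scanning logic lives down in the driver and firmware.

```python
# Hypothetical sketch of "scan smarter": prune the roam-scan list down to the
# channels listed in an 802.11k neighbor report. The structures are invented
# for illustration; real scanning logic lives in the client driver/firmware.
from dataclasses import dataclass

@dataclass
class NeighborEntry:
    bssid: str
    channel: int

# The full 5GHz channel walk the client would otherwise have to make
FULL_SCAN_LIST = [36, 40, 44, 48, 52, 56, 60, 64,
                  100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140,
                  149, 153, 157, 161, 165]

def prune_scan_list(neighbor_report):
    """Keep only the channels the neighbor report says actually have an AP."""
    reported = {entry.channel for entry in neighbor_report}
    return [ch for ch in FULL_SCAN_LIST if ch in reported]

# Example: three reported neighbors, only one of them on a UNII-2e channel
report = [NeighborEntry("aa:bb:cc:00:00:01", 36),
          NeighborEntry("aa:bb:cc:00:00:02", 149),
          NeighborEntry("aa:bb:cc:00:00:03", 100)]

print(prune_scan_list(report))  # -> [36, 100, 149]
```

In that example the client only has to visit three channels instead of two dozen, and only one of them requires a slow passive dwell.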

 

In a smaller environment the neighbor list will be shorter, which could prove quite successful. On the other hand, in a large-scale, high-density deployment, would it be beneficial, or would the neighbor list simply contain too many APs on those passive-only channels to be worthwhile?

 

Final Thoughts

In my limited understanding of how this all works, I think the answer lies somewhere in the middle, and here’s why: smaller networks would most likely be able to avoid the whole problem and just disable the channels. Even with that chunk removed, there is still enough spectrum to accommodate a well designed and smoothly functioning environment. Larger networks, on the other hand, may see benefits if there was a lot of purposeful design up front to ensure the DFS channels are scattered far enough apart that the neighbor report lists at most a couple of APs on them. I’m not sure where the line in the sand is between the two, but it’s something I’d love to put to the test.

 

Unfortunately I don't have any answers to give here. I don’t have a network large enough to properly test either of these theories, and, having only just started digging into some of these standards, I’m honestly not knowledgeable enough to say either way. So I’m reaching out to the community: if you’d like to add to the discussion or you can provide the answers I’m looking for (even if the answer is a qualified “it depends”), please do. I’d love to learn more about the topic from anyone who might have some insight.


Comments

Jul 12, 2016 12:27 PM

Why not just have the APs perform the radar detection and tell the clients what channel to switch to?

The clients should not be performing active probes on DFS channels anyway.

The only restriction would then be to have non-DFS channels (albeit with lower SNR) in every mix. The network would have to be smart enough to then tell the client which DFS channel to switch to.

It seems like the additional capacity from those extra channels would be worth the management overhead.