Wired Intelligent Edge (Campus Switching and Routing)

New Contributor

Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

Hello everybody,

 

This is my first post, so sorry if this is the wrong place.

 

A customer will receive two Nimble Storage systems that will run a sync between them.

To realize this, we will use 2x Aruba CX 8320 in a stack in each datacenter, connected to each other with 4 (or more) 10 GbE links. They should also have an uplink to the existing core switches (Aruba 5400R zl2), which are also connected to each other.

Now the idea was to tag only the VLAN required for the Nimble sync on the interconnect between the two CX 8320 stacks, which should avoid a loop and thus prevent STP from blocking that interconnect.

But I am not sure whether that actually works.

Coreswitch1 <> Coreswitch2 (trk1, every VLAN except 999)

Switchstack1 <> Switchstack2 (trk4, only VLAN 999)

Switchstack1 <> Coreswitch1 (trk2, every VLAN except 999)

Switchstack2 <> Coreswitch2 (trk2, every VLAN except 999)
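
For illustration, the interconnect trunk as I picture it on the 8320s (AOS-CX syntax; the LAG number and member port are just examples, assuming trk4 maps to a LACP LAG):

    vlan 999

    interface lag 4
        no shutdown
        no routing
        description Inter-DC interconnect, Nimble sync VLAN only
        vlan trunk native 999
        vlan trunk allowed 999
        lacp mode active

    interface 1/1/49
        no shutdown
        lag 4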

 

The end goal is to have every link active to reduce latency for the Nimble Sync and avoid bottlenecks for the server traffic.

 

Thank you for the help.




All Replies
Aruba Employee

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

What you are describing is a pair of VSX switches in each DC, connected by a VSX LAG.

 

Take a look at the VSX Configuration Best Practices guide; the appendix has a site-to-site L2 extension use case.
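
In rough outline, a VSX pair with a multi-chassis LAG looks like this on AOS-CX (a sketch only; the LAG numbers and keepalive addresses are placeholders, not taken from the guide):

    ! ISL between the two VSX members
    interface lag 256
        no shutdown
        no routing
        vlan trunk allowed all
        lacp mode active

    vsx
        inter-switch-link lag 256
        role primary
        keepalive peer 192.0.2.2 source 192.0.2.1

    ! VSX LAG toward the far end, carrying the sync VLAN
    interface lag 4 multi-chassis
        no shutdown
        no routing
        vlan trunk allowed 999
        lacp mode active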

MVP Guru

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

Yep, that's it. The latest edition (1.2, March 2020, but published today) of the VSX Best Practice Guide adds Appendix F, "VLAN extension between two VSX clusters", which explains a possible solution (note: it requires at least 4 physical inter-site links to properly connect both VSX members at each site using a VSX LAG).

New Contributor

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

Thank you both, this helps a lot.

 

Just to clarify it for me.

 

There is no plan to use OSPF (or any routing protocol) between the 8320 and the 5400R zl2 at the moment because of how the network is currently built. Also, the 5400R zl2 are connected to each other and only one has the gateway attached. They will be replaced in the future, but that's not possible at the moment.

 

If I understand the document correctly (which I probably don't), the STP separation is realized by creating a separate network via OSPF routing between the routers/core switches and the VSX pair in each datacenter, which I cannot do in this instance.

 

So with the document in mind, I would build something like this (simplified):

[attached diagram: network.jpg]

 

Thank you again.

 

 

Contributor II

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

Hi Tuezenli.

 

Using only the L2 connectivity you described may simplify the configuration, but you still have to pay attention to STP.

How is STP configured? Is it RSTP/MSTP with a single instance?

If STP is configured with a single instance, this topology will create a loop in the network, and one of the paths, probably the LAG between the 8320 switches, will be blocked by STP.

The alternatives would be to use a different instance for VLAN 999 or, if you prefer to continue using only one instance, to disable/filter STP on the LAG between the 8320 switches; in that case you have to make sure that VLAN 999 is not allowed on the links to the core switches.
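
For example, mapping VLAN 999 to its own MSTP instance on the 8320s would look something like this (AOS-CX syntax; the config name, revision, and instance number are placeholders and must be identical on every switch in the MST region):

    spanning-tree
    spanning-tree config-name DC-REGION
    spanning-tree config-revision 1
    spanning-tree instance 1 vlan 999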


New Contributor

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

Thank you.

 

That was what I wasn't sure about: whether I could avoid a loop with an additional STP instance.

 

We are currently using MSTP, and I was going to put VLAN 999 in a different instance to avoid the loop, but I didn't know whether that actually worked.
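
If it helps anyone else, the matching region settings on the 5400R zl2 side would presumably look like this (ArubaOS-Switch syntax; name, revision, and instance number are the same placeholders as above; if they differ between switches, the switches end up in separate MST regions and the per-instance separation no longer applies between them):

    spanning-tree config-name "DC-REGION"
    spanning-tree config-revision 1
    spanning-tree instance 1 vlan 999
    spanning-tree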

 

We will build it the recommended way in the future, once the remaining switches are replaced. It's just not possible at the moment.

 

Thank you all for the help.

 

 

MVP Guru

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

Hi Davide, 

 

In the doc: "The example design contains four physical circuits for inter-site connectivity (two physical circuits can be used instead of four due to cost or fiber constraints)." Four physical links is the ideal case, but not mandatory.

MVP Guru

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect

VLAN separation is an efficient approach.

Just make sure that VLAN 1 is excluded from the trunk carrying 999.
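
On AOS-CX that's just a matter of restricting the allowed list and moving the native VLAN off 1, along these lines (lag 4 being the inter-stack LAG from the earlier posts):

    interface lag 4
        no routing
        ! an explicit allowed list of only 999 keeps VLAN 1 off this trunk
        vlan trunk native 999
        vlan trunk allowed 999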

MVP Guru

Re: Connect 2 datacenters with 4 switches in a ring topology without disabling the interconnect


@vincent.giles wrote: Four physical links is the ideal case, but not mandatory.

You're correct. Since the OP is going to connect two VSX clusters... yes, it's possible to use just two physical links (that's the bare minimum for a non-degraded VSX LAG).

 

OTOH, to truly stay on the safe side, I personally wouldn't deploy it with fewer than four physical links, where each VSX member is connected to each VSX member of the peer VSX at the far end... Is it worth it? It really depends (danger, risk, and damage) on the level and duration of disruption one is able to accept... Murphy's Law is always waiting around the nearest corner.
