4 slots IRF : only one management IP working (5700 FlexFabric)

  • 1.  4 slots IRF : only one management IP working (5700 FlexFabric)

    Posted Nov 17, 2022 12:05 PM
    Hello,
    I'm an absolute network newbie (I was put in charge of managing this...), trying to cope with the following issue:

    In a 4-slot IRF cluster, each management port (on the rear of each switch) is linked to a dedicated switch.
    The setup is as follows:

    interface M-GigabitEthernet0/0/0
    ip address 10.100.3.10 255.255.255.0
     mad bfd enable
     mad ip address 10.100.3.11 255.255.255.0 member 1
     mad ip address 10.100.3.12 255.255.255.0 member 2
     mad ip address 10.100.3.13 255.255.255.0 member 3
     mad ip address 10.100.3.14 255.255.255.0 member 4

    As I understand it, 10.100.3.10 is a floating virtual IP that moves along with the elections and the life and death of each slot. Whatever happens to the slots, we can always ping this virtual IP, which is nice, but it is not the point of this post.

    I'd like to be able to ping (and SSH into) every slot by its own IP. But so far, only 10.100.3.11 is reachable, which I don't understand, as nothing special is configured for this IP or for its physically linked port.

    To be honest, I don't even know why MAD is needed to set up individual management IPs.

    Following the docs, I should be able to use:
    system-view
    interface M-GigabitEthernet1/0/0
    then set up the IP.

    But I'm getting:
    [xxxxxx]interface M-GigabitEthernet1/0/0
    ^
    % Wrong parameter found at '^' position.

    So I guess I'm missing something in this setup.

    Is it even possible to set up an individual IP for each management port of each slot?

    Thank you for your hints.

    Nicolas


  • 2.  RE: 4 slots IRF : only one management IP working (5700 FlexFabric)

    EMPLOYEE
    Posted Nov 17, 2022 01:00 PM

    Hi Nicolas,

    IRF is not a typical old-fashioned management clustering technology, where every member has its own IP and one common WebUI listens on an IP shared by all switches (added on top of a unique IP per slot). IRF is a stacking technology where all physical members are treated as one chassis-based switch with two redundant MPUs:

    The Master switch's control plane becomes something like the Master MPU in a chassis.
    All other switches' control planes become a sort of single virtualised Standby MPU in that chassis.
    All physical ports of each member become like virtual line cards in a chassis: ports on the first member are 1/0/* (like a line card in Slot 1 of a chassis), ports on the second member are 2/0/* (like a line card in Slot 2), and so on.

    That is why you cannot have an IP address to access each particular member - it doesn't make any sense; it's like saying "I want to SSH to my line card in Slot 3", which is just not possible. Each IRF member outsources its brain 😉 to the current Master; the only work this "brain" (control plane) performs is syncing with the Master, listening for configuration and management commands coming from the Master, and reporting various local events back to it. Even if you take a console cable and connect it to the console port of, say, member 3, which is a Standby at the moment, you will get the command prompt of the Master (member 1, I suppose, in your case). The IP 10.100.3.10 will always belong to the current Master, no matter which IRF member that is.
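
    If you want to see this for yourself, you can check the current roles from the single CLI. These are standard Comware 7 commands, but the exact output columns vary by release, so treat this as a pointer rather than a recipe:

    display irf
    display irf topology

    The first lists the member IDs with their roles (Master/Standby) and priorities; the second shows the state of the IRF links between members.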

    Regarding MAD: if you get a split brain and, say, member 3 suddenly becomes a new Master while member 1 keeps its Master role, then BOTH IPs 10.100.3.11 AND 10.100.3.13 become active, the BFD session between them is established, and that is a clear sign of a split-brain situation. Member 1 then signals member 3 over the MAD session to disable its ports and accept its destiny: obey and be Standby 🙂 When a stack is healthy, only one of all those MAD IPs is active, therefore the BFD session is DOWN, and that is the signal that everything is OK and there is only one Master aboard 🙂
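
    For completeness, a couple of commands that let you watch this mechanism in action (again Comware 7 syntax, to be checked against your release):

    display mad verbose
    display bfd session

    The first shows the configured MAD mechanism (BFD here) and its current state; the second shows the BFD sessions, and the MAD ones should stay Down while the stack is healthy.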

    So your expectations were simply off; you just need a bit more time to get used to all these new concepts.



    ------------------------------
    Ivan Bondar
    ------------------------------



  • 3.  RE: 4 slots IRF : only one management IP working (5700 FlexFabric)

    Posted Nov 18, 2022 03:07 AM
    Hello Ivan,

    First, thank you for such a comprehensive answer, which enlightens me on many points, especially about the MAD feature.
    As I'm not the person who chose, bought, and set up this cluster, I'm taking the time to understand these principles.
    I have a solid 20+ years of sysadmin experience, including clusters of many kinds, but rather thin experience with dedicated network switches.
    In that respect, I was expecting to find on the FlexFabric switches something similar to what an iDRAC or iLO is to a physical server: an out-of-band way to get access, whatever the state of the box, dead or alive.

    Nothing serious here, as I just have to live with it and take into account what you explained (this cluster is, in the end, one and only one big appliance).
    The only point that bothers me is that this four-slot cluster is split across two sites - [site 1: 2 slots] and [site 2: 2 slots] - linked by direct fiber.
    It means that in case of trouble (which did happen this year), I have no convenient way to access each individual box, so I have to jump in my car and irritate Greta...

    Nicolas


  • 4.  RE: 4 slots IRF : only one management IP working (5700 FlexFabric)

    EMPLOYEE
    Posted Nov 18, 2022 04:14 AM
    Hi Nicolas,

    I'd consider using a console server in this case: one console server per site, each with two serial links to the console ports of the local IRF members. Whatever happens, you will still be able to access each particular box - and save some CO2 to make Greta happy 🙂

    ------------------------------
    Ivan Bondar
    ------------------------------