NetApp AFF Cluster Interconnect Switches


In this post, we will discuss NetApp Cluster-Mode hardware architecture. This includes the various components of NetApp cluster mode and how those hardware components are connected to each other.

Then we will discuss the types of cluster mode and finally, we will see how data requests flow in NetApp cluster mode.

NetApp Cluster-Mode Architecture

There are three main hardware components of NetApp cluster mode. We will discuss each of them in detail.

NetApp Cluster and Storage Switches

NetApp offers “front-end” Fibre Channel switches that provide host-to-storage connectivity and “back-end” cluster switches that interconnect two or more NetApp FAS/AFF controllers. The back-end switches carry the cluster’s internal communications and make it possible to move data and network interfaces non-disruptively.

A CN1610 has 16 ports in total, of which 4 are consumed by the switch-to-switch ISL, leaving 12 for cluster connections. A pair of CN1610s can therefore support at most 6 HA pairs (12 nodes). However, check the Hardware Universe (HWU), because certain controller models can require more than one cluster port per switch and may be subject to mixing rules or cluster-size limitations.

Simpler, faster, and more efficient, NetApp’s newest entry-level and updated high-end AFF systems, such as the entry-level AFF A250, let you accelerate more of your data and move it in and out of the cloud without a hitch. Like the rest of the A-Series arrays, each is built on the latest NVMe technology at no extra cost.

The NetApp® HA pair controller configuration delivers a robust, highly available data service for business-critical environments. Each of the two identical storage controllers in the HA pair serves data independently during normal operation; in the event of an individual controller failure, the data service transfers from the failed controller to its partner.

Only platforms that provide dedicated ports for switchless cluster interconnects are supported. Platforms such as the FAS2750 and AFF A220 are not supported because MetroCluster traffic and MetroCluster interconnect traffic share the same network ports. Connecting local cluster connections to a MetroCluster-compliant switch is not supported.

1. Nodes or Controllers

Nodes or controllers are the computing part of the NetApp Cluster. They process all incoming and outgoing requests for the cluster.

NetApp nodes have slots that contain I/O modules. These modules provide ports for Ethernet, FC, FCoE, and SAS connectivity.

Ethernet ports connect to network switches for LAN connectivity. Similarly, FC ports connect to SAN switches for FC connectivity. To learn more about FC connectivity, you can read our post on SAN Architecture.

FCoE ports can carry both Ethernet and FC traffic. Finally, SAS ports connect the controllers to disk shelves.

NetApp nodes also contain other hardware such as NVRAM, CPU, PSU, and Fans.

2. Disk Shelves

Disk shelves in NetApp cluster mode hold the physical disks. NetApp supports various types of disks. Disk shelves and controllers are connected to each other via SAS ports.

In an HA pair, the disk shelves are connected to both controllers, so that if one controller fails, the other controller takes over the data requests.

3. Inter-Cluster Switches

Intercluster switches are also known as cluster switches. These are network switches that provide the connections between NetApp controllers.

Each node in the cluster has a minimum of two cluster ports that connect to the cluster switches.

What Is Cluster-Mode NetApp?

Cluster-Mode NetApp is basically a grouping of multiple HA-pair nodes. These nodes are interconnected via intercluster switches, creating a cluster of nodes.

There are three types of cluster configurations: the single-node cluster, the switchless cluster, and the cluster with switches.

NetApp cluster mode can have a maximum of 24 nodes, that is, 12 HA pairs. However, in the case of FC or mixed protocols, it can have only 8 nodes.

Single Node Netapp Cluster

A single-node NetApp cluster has only one node. A single-node cluster does not provide any redundancy, so there is always a risk of outage and data loss.

Two Node Switchless Cluster

A two-node switchless cluster has two nodes in an HA pair. These nodes are connected to each other directly rather than via intercluster switches.

The direct connection between two nodes for data flow is called Cluster Interconnect.

NetApp Cluster With Cluster Switch

You can also use intercluster switches to connect two nodes. Intercluster switches help with the scalability of the cluster.

Multiple Nodes with Inter-Cluster Switch

A NetApp cluster can also have more than two nodes, up to 24 nodes. All nodes connect to two cluster switches. In the next section, we will discuss how I/O flows through the network.

How Data Flows In NetApp Cluster Mode

[Figure: NetApp cluster interconnect]

First, a read or write request arrives at one of the NetApp nodes. Data ONTAP checks whether the requested data resides on the local node or on a remote node.

If the data is found on the local node, the request is processed and the response is sent back to the client through the same path. If the data resides on a remote node, Data ONTAP forwards the request to that node over the cluster network.

The remote node then processes the request and sends the data back to the client through the same path.
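The routing decision above can be sketched in a few lines of illustrative Python (this is not NetApp code; the node and volume names are made up for the example):

```python
# Illustrative sketch: each node owns a set of volumes, and a request
# arriving at any node is either served locally or forwarded over the
# cluster network to the node that owns the volume.

def handle_request(receiving_node, volume, cluster):
    """Return the name of the node that actually serves `volume`."""
    if volume in receiving_node["volumes"]:
        # Data is local: process and reply over the same path.
        return receiving_node["name"]
    # Data is remote: forward over the cluster interconnect to the owner.
    for node in cluster:
        if volume in node["volumes"]:
            return node["name"]
    raise KeyError(f"volume {volume!r} not found in cluster")

cluster = [
    {"name": "node1", "volumes": {"vol_a"}},
    {"name": "node2", "volumes": {"vol_b"}},
]

print(handle_request(cluster[0], "vol_a", cluster))  # served locally by node1
print(handle_request(cluster[0], "vol_b", cluster))  # forwarded to node2
```

Either way, the response returns to the client through the node that originally received the request, which is why cluster-network bandwidth matters for remote access.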

I hope you gained some understanding of the hardware architecture of NetApp cluster mode. Subscribe to our newsletter for more content like this. You can also subscribe to our YouTube channel for video tutorials.

NetApp has many Knowledge Base articles to help configure these switches. I wanted to put together a blog post that collects all the information in one easy-to-read place. As delivered, the switch login is “admin” with an empty password (just hit Enter!).

First, we need to get the switch on the network. Connect to the serial port (9600/8/N/1).

Log in with username “admin” followed by Enter twice (no password yet).

Enter privileged mode by typing “enable” followed by Enter twice (no password yet).

Setup the “Service” port:

Example:

Verify the service port:

Ping the Gateway:
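As an illustration, the service-port setup, verification, and gateway ping look roughly like this on the CN1610's FASTPATH CLI (the addresses below are placeholders for your own management network):

```text
(CN1610) #serviceport ip 10.10.10.10 255.255.255.0 10.10.10.1
(CN1610) #show serviceport
(CN1610) #ping 10.10.10.1
```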

More than likely, you will need to update the FASTPATH code. To do that, you need an SCP or TFTP server (see another post about this).

It is best to copy the current running firmware to the backup image on the switch (if needed, the software can also be downloaded from the NetApp Support Page for the CN1610).

You will need to confirm by pressing “y” and nothing else. Once that finishes, copy the image from your TFTP server to the active image:
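Both steps, sketched with placeholder values (the TFTP server address and image filename below are assumptions; use the image for your switch from the NetApp Support Page):

```text
(CN1610) #copy active backup
(CN1610) #copy tftp://10.10.10.20/CN1610_1.2.0.7.stk active
```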

The current images are 1.2.0.7 (with RCF 1.2) and 1.1.0.8 (with RCF 1.1), located on the NetApp Support Page. Always verify version information with the NetApp Interoperability Matrix Tool (IMT).

Verify the boot image:

Reboot the switch:
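The verification and reboot steps are typically:

```text
(CN1610) #show bootvar
(CN1610) #reload
```

Answer “y” when the reload asks for confirmation.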

When the switch finishes rebooting, create a “running-config.scr” file:

Place a backup copy off the switch and on the TFTP server. I like to add more to the off-switch name to make it easy to identify:
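A sketch of both steps (the TFTP server address and the off-switch filename are placeholders; I prefix the switch name so the source is obvious):

```text
(CN1610) #copy running-config nvram:script running-config.scr
(CN1610) #copy nvram:script running-config.scr tftp://10.10.10.20/cs1-running-config.scr
```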

Copy the appropriate RCF to your switch:

Verify it made it on the switch:

Validate the script:
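For example, assuming the RCF filename as published on the NetApp Support Page for your version (server address and filenames below are placeholders):

```text
(CN1610) #copy tftp://10.10.10.20/CN1610_CS_RCF_v1.2.txt nvram:script CN1610_CS_RCF_v1.2.scr
(CN1610) #script list
(CN1610) #script validate CN1610_CS_RCF_v1.2.scr
```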

That will print each line and validate the script. If any commands are wrong or do not apply to the current FASTPATH version, the validation will indicate the line number where the issue(s) occurred.

Apply the script:

This will also print out each line in the script and notify you that it was successful. Save the in-memory running configuration.
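Sketched with the same placeholder script name as above:

```text
(CN1610) #script apply CN1610_CS_RCF_v1.2.scr
(CN1610) #write memory
```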

Check out the running configuration:
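```text
(CN1610) #show running-config
```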

Set the passwords for standard and privilege mode:

If this is a new switch, there is no password; just hit Enter. If you already assigned a password, enter it at the prompt. Then follow up with the new password and confirm it.
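On the FASTPATH CLI, this is the `passwd` command (you will be prompted for the old and new passwords):

```text
(CN1610) #passwd
```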

Enter enable mode and set the password:

If this is a new switch, the enable password is empty; just hit Enter at the prompt. If you already assigned an enable password, enter the current password at the prompt. Then follow up with the new password and confirm it.
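On the FASTPATH CLI, this is the `enable passwd` command (you will be prompted for the old and new passwords):

```text
(CN1610) #enable passwd
```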

Save the running configuration:
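```text
(CN1610) #write memory
```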


Reboot the switch:
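```text
(CN1610) #reload
```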

Here are the commands to customize your configuration. All lines beginning with the “!” will be ignored by the switch. It is safe to copy/paste those lines without worry of error. Modify to fit your site as needed:
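As one small illustration, NTP on the CN1610 is configured with SNTP commands in global configuration mode (the server address below is a placeholder; see the CN1610 NTP KB article in the reference links for the full procedure):

```text
(CN1610) #configure
(CN1610) (Config)#sntp client mode unicast
(CN1610) (Config)#sntp server "10.10.10.30"
(CN1610) (Config)#exit
```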

Now that you have the configuration in place, save it and upload it to your TFTP server:
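A sketch of the save-and-upload sequence (server address and off-switch filename are placeholders, as before):

```text
(CN1610) #write memory
(CN1610) #copy running-config nvram:script running-config.scr
(CN1610) #copy nvram:script running-config.scr tftp://10.10.10.20/cs1-running-config.scr
```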

Reference Links (warning, some links are only accessible to NetApp and Partners)

Unable to locate how to enable SSH for the CN1610 switches – 2018779
Unable to ping CN1610; however, ‘show network’ displays the correct IP address
OEM: How to configure the 10Gb NetApp CN1610 clustered Data ONTAP switch
How to configure e-mail alerts for CN1610 and CN1601


How to configure NTP services on the cluster interconnect switch CN1610
How to transfer firmware or script files to a NetApp CN1610 switch using SCP
INTERNAL: How to disable SSH V1 on CN1610 cluster switches
How to configure SNMP Community String in Cluster Interconnect Switch CN1601/CN1610
How to disable telnet on a NetApp CN1610 switch
INTERNAL: How to configure TACACS
