Guenadi N Jilevski's Oracle BLOG

Oracle RAC, DG, EBS, DR and HA DBA BLOG

Hardware solutions for Oracle RAC 11g private interconnect aggregation

For the interconnect to be optimal, it must be private, with high bandwidth and low latency. Bandwidth requirements depend on several factors, such as the number and speed of CPUs per node, the number of nodes, and the type of workload (OLTP or DSS). These requirements are met with hardware such as 1 Gbps Ethernet, 10 Gbps Ethernet or InfiniBand, combined with the UDP or RDS transport protocols. Interconnect redundancy is recommended for high availability, increased bandwidth and decreased latency. There are multiple vendor-specific network hardware solutions that provide higher bandwidth through aggregation, load balancing, load spreading and failover of the interconnect links. Let’s look at the different solutions and topologies for aggregation, each aimed at higher throughput and bandwidth, higher availability and lower latency:
·    Sun aggregation and multipathing.
·    HP Auto Port Aggregation.
·    IBM AIX EtherChannel.
·    Cisco EtherChannel.
·    Linux bonding.
·    Windows NIC teaming.
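
Whatever the platform, the aggregated interface must also be the one that Oracle Clusterware knows as the cluster interconnect. As a quick sanity check (the interface name and subnet below are placeholders), the registered networks can be listed, and changed if necessary, with oifcfg:

    # List the networks registered with the clusterware and their roles
    oifcfg getif

    # Register an aggregated interface and its subnet as the private interconnect
    oifcfg setif -global bond0/192.168.10.0:cluster_interconnect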

Sun aggregation and multipathing

Link aggregations consist of groups of Network Interface Cards (NICs) that provide increased bandwidth, higher availability and fault tolerance. Network traffic is distributed among the members of an aggregation, and the failure of a single NIC should not affect the availability of the aggregation as long as there are other functional NICs in the same group.
IP Multipathing (IPMP) provides higher availability at the IP layer. Both IPMP and link aggregation are based on grouping network interfaces, and some of their features, such as higher availability, overlap. The two technologies are, however, implemented at different layers of the stack and have different strengths and weaknesses.
Link aggregations, once created with dladm, behave like any other physical NIC to the rest of the system, whereas the grouping of interfaces for IPMP is done with ifconfig. Link aggregations do not currently allow separate standby interfaces that remain idle until a failure is detected: if a link is part of an aggregation and is healthy, it is used to send and receive traffic. Link aggregations are implemented at the MAC layer and require all constituent interfaces of an aggregation to share the same MAC address; since IPMP is implemented at the network layer, it does not have that limitation. On the other hand, link aggregations give finer-grained control over how outbound traffic is spread across the aggregated links, for example load balancing on transport protocol port numbers rather than only on MAC addresses, and dladm makes it easy to observe the inbound and outbound distribution of traffic over the constituent NICs. It is also worth pointing out that IPMP can be deployed on top of an aggregation to maximize performance and availability.
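
As a minimal sketch of how this might look on Solaris (the interface names, addresses and IPMP group name below are placeholders, and the exact dladm syntax differs between Solaris releases), an aggregation for the private interconnect could be created and then placed under IPMP roughly as follows:

    # Solaris 10 style: aggregate two physical NICs under aggregation key 1 (aggr1)
    dladm create-aggr -d e1000g2 -d e1000g3 1

    # Plumb the aggregation with the private interconnect address and put it
    # in an IPMP group so failures are also detected at the IP layer
    ifconfig aggr1 plumb 192.168.10.1 netmask 255.255.255.0 group priv_ipmp up

    # Observe how inbound and outbound traffic is spread over the member links
    dladm show-aggr -s -i 5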

HP Auto Port Aggregation

Hewlett-Packard’s Auto Port Aggregation (APA) increases a server’s efficiency by grouping, or “aggregating”, multiple ports into a single link aggregate or fail-over group with a single IP address. Up to fifty aggregates per computer are permitted on HP-UX 11i v1 and 11i v2. APA provides the following benefits (a configuration sketch follows the list):
·    Load balancing – The server traffic load is distributed over each member of the link aggregate so that each individual link is used. No links are wasted as they would be under a “hot standby” mode of operation. HP Auto-Port Aggregation’s load balancing also attempts to maximize throughput over the links.
·    High throughput – Four 100Base-T links provide 4 x 100 Mbps (400 Mbps) in each direction, or 800 Mbps in both directions. A single HP Auto-Port Aggregation trunk containing four 1000Base-T links can handle eight gigabits per second. This high throughput level is especially useful for bandwidth-intensive applications.
·    Single IP address capability – HP Auto-Port Aggregation provides high throughput in multiples of 1000/100Mbps using a single IP address. It enables customers to transparently increase overall bandwidth without reconfiguring servers to add additional IP addresses and with no IP routing table modifications or adjustments to other network parameters. Single IP address capability means bandwidth growth without the work of modifying thousands of IP addresses and with no need to reconfigure sensitive parameters inside the network.
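
Purely as an illustration of the idea (the interface numbers are placeholders, and the attribute names should be checked against the APA manual for your HP-UX release, so treat this as an assumption rather than a recipe), an APA link aggregate is typically described in the APA configuration files and then verified with lanscan:

    # /etc/rc.config.d/hp_apaconf -- the aggregate itself (lan900 is typically the first aggregate)
    HP_APA_INTERFACE_NAME[0]=lan900
    HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC

    # /etc/rc.config.d/hp_apaportconf -- physical ports that join the aggregate
    HP_APAPORT_INTERFACE_NAME[0]=lan1
    HP_APAPORT_CONFIG_MODE[0]=MANUAL
    HP_APAPORT_INTERFACE_NAME[1]=lan2
    HP_APAPORT_CONFIG_MODE[1]=MANUAL

    # After restarting APA, the aggregate should be listed alongside the physical NICs
    lanscan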

AIX EtherChannel

EtherChannel and IEEE 802.3ad Link Aggregation are network port aggregation technologies that allow several Ethernet adapters to be aggregated together to form a single pseudo Ethernet device. For example, ent0 and ent1 can be aggregated into an EtherChannel adapter called ent3; the corresponding interface (en3) would then be configured with an IP address. The system treats these aggregated adapters as one adapter, so IP is configured over them as over any Ethernet adapter.
All adapters in the EtherChannel or Link Aggregation are given the same hardware (MAC) address, so remote systems treat them as if they were one adapter. Both EtherChannel and IEEE 802.3ad Link Aggregation require switch support, so that the switch knows which of its ports should be treated as one.
The main benefit of EtherChannel and IEEE 802.3ad Link Aggregation is that they present the combined network bandwidth of all of their adapters as a single network presence. If an adapter fails, network traffic is automatically sent on the next available adapter without disruption to existing user connections, and the adapter is automatically returned to service on the EtherChannel or Link Aggregation when it recovers. EtherChannel therefore satisfies the large-bandwidth requirement of the RAC interconnect, and an EtherChannel with multiple links is extremely beneficial for a RAC cluster. Although multiple gigabit networks can be connected between nodes without using EtherChannel, only one network link can be used by RAC as the private interconnect at any one time; the remaining links can only serve as standbys for failover, so as long as the primary RAC interconnect is alive the backup networks are never used. The EtherChannel round-robin algorithm aggregates all the link bandwidths and so addresses the bandwidth requirement. RAC will use the EtherChannel if its IP address is specified in the init.ora parameter cluster_interconnects for each instance.
An EtherChannel configured with multiple links also has built-in high availability: as long as one link is available, the Ethernet continues to function. In a test with a two-link EtherChannel, disconnecting one of the gigabit links under cache fusion traffic left the EtherChannel up and the RAC instances running; the link failure was reported only in the AIX error report (errpt). An EtherChannel with more than two links may provide even better availability. An EtherChannel with multiple links, using the round-robin algorithm and aggregated network bandwidth, provides better network performance and availability than the traditional scheme of two private interconnects and one public network.
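
For example (the addresses and SIDs below are placeholders), once the EtherChannel pseudo-adapter carries each node's private IP address, the instances can be pointed at it explicitly and the interconnect actually in use can be checked from the database:

    # init.ora entries, one per instance, each pointing at that node's EtherChannel IP
    RAC1.cluster_interconnects='10.10.10.1'
    RAC2.cluster_interconnects='10.10.10.2'

    -- After restarting the instances, verify which interconnect each instance uses
    SELECT inst_id, name, ip_address, source FROM gv$cluster_interconnects;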

Cisco EtherChannel Benefits

Cisco EtherChannel technology provides a solution for enterprises that require higher bandwidth and lower latency between servers, routers and switches than a single Ethernet link can provide. Cisco EtherChannel technology provides incremental, scalable bandwidth and the following benefits (a switch-side configuration sketch follows the list):
·    Standards based—Cisco EtherChannel technology builds upon IEEE 802.3-compliant Ethernet by grouping multiple, full-duplex point-to-point links together. EtherChannel technology uses IEEE 802.3 mechanisms for full-duplex autonegotiation and autosensing, when applicable.
·    Multiple platforms—Cisco EtherChannel technology is flexible and can be used anywhere in the network that bottlenecks are likely to occur. It can be used in network designs to increase bandwidth between switches and between routers and switches—as well as providing scalable bandwidth for network servers, such as large UNIX servers or PC-based database servers.
·    Flexible incremental bandwidth—Cisco EtherChannel technology provides bandwidth aggregation in multiples of 100 Mbps, 1 Gbps, or 10 Gbps, depending on the speed of the aggregated links. For example, one can deploy EtherChannel technology that consists of pairs of full-duplex Fast Ethernet links to provide more than 400 Mbps bandwidth. Bandwidths of up to 800 Mbps can be provided between servers and the network backbone to provide large amounts of scalable incremental bandwidth.
·    Load balancing—Cisco EtherChannel technology is composed of several Fast Ethernet links and is capable of load balancing traffic across those links. Unicast, broadcast, and multicast traffic is evenly distributed across the links, providing higher performance and redundant parallel paths. When a link fails, traffic is redirected to the remaining links within the channel without user intervention and with minimal packet loss.
·    Resiliency and fast convergence—When a link fails, Cisco EtherChannel technology provides automatic recovery by redistributing the load across the remaining links. When a link fails, Cisco EtherChannel technology redirects traffic from the failed link to the remaining links in less than one second. This convergence is transparent to the end user—no host protocol timers expire, so no sessions are dropped.
·    Transparent to network applications—Cisco EtherChannel technology does not require changes to networked applications. When EtherChannel technology is used within the campus, switches and routers provide load balancing across multiple links transparently to network users. To support EtherChannel technology on enterprise-class servers and network interface cards, smart software drivers can coordinate distribution of loads across multiple network interfaces.
·    100 Megabit, 1 Gigabit, and 10 Gigabit Ethernet-ready—Cisco EtherChannel technology is available in all Ethernet link speeds. EtherChannel technology allows network managers to deploy networks that will scale smoothly with the availability of next-generation, standards-based Ethernet link speeds.
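
The switch side of such a channel is straightforward. As a hedged sketch (port numbers, channel number and VLAN are placeholders, and the choice between LACP and a static channel depends on what the server-side aggregation supports), the IOS configuration looks roughly like this:

    ! Bundle two switch ports facing one RAC node into Port-channel 1
    ! (channel-group mode "active" uses LACP; use "on" for a static EtherChannel)
    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active
    !
    ! VLAN 100 is assumed to carry the private interconnect
    interface Port-channel1
     switchport mode access
     switchport access vlan 100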

Linux IP Bonding

Linux IP bonding can be used to create a virtual NIC that runs over multiple physical NICs. Packets sent out over the virtual NIC can then be load balanced across the physical NICs, which should increase performance since the available bandwidth can theoretically be doubled or tripled. More information on IP bonding can be found at:
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/ref-guide/s1-modules-ethernet.html
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. The behavior of the bonded interfaces depends on the mode; generally speaking, modes provide either hot standby or load balancing services.
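
As a sketch of the RHEL 4-era configuration described at the link above (device names, addresses and the mode/miimon values are examples; mode=balance-rr spreads traffic round-robin, while miimon enables link monitoring), bonding is set up with a module alias plus one ifcfg file per interface:

    # /etc/modprobe.conf -- load the bonding driver for bond0
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the logical interconnect interface
    DEVICE=bond0
    IPADDR=192.168.10.1
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- repeat for each slave NIC
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

    # Verify the bonding mode and the status of the slave links
    cat /proc/net/bonding/bond0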

Windows NIC teaming

Windows allows aggregation using third-party products; the feature is known as NIC teaming.

December 13, 2009 - Posted by | oracle
