I purchased an IBM BladeCenter for a number of our systems. It is a compact blade system that puts 14 servers in 7U.

My typical server config is dual P4 3GHz CPUs, 2GB of RAM and 2 x 40GB IDE drives, and the machines come with an AMI IDE RAID card. The RAID card is very impressive in that it presents the OS with a single SCSI device, which is much nicer than the Promise cards and the like.

Individual servers have dual gigabit Ethernet cards that go out the back through dual Layer 7 Nortel switches. Obviously I wanted to bond these for high availability and load sharing.

Read on for details on how this was done using RedHat Enterprise.

The first thing to know is that bonding support lives in the kernel, and there is a good document in your kernel source tree under Documentation/networking/bonding.txt that has a lot more detail than I am going to provide here.

A virtual network interface gets created, bond0 in my case; this gets done in /etc/modules.conf:

alias bond0 bonding
options bond0 miimon=100 mode=balance-rr

The above creates the bond0 interface and sets some options. It will check the MII link state of each card every 100 milliseconds for state change notification, and it will use the round robin balancing policy. More on these and many other options can be found in bonding.txt.
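
As an aside, and only as a sketch (I use balance-rr here), bonding.txt also describes an active-backup policy that gives you pure failover with no load sharing; that would just be a different options line:

options bond0 miimon=100 mode=active-backup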

RedHat’s RC scripts support this bonding configuration without much modification, though there isn’t a GUI tool to configure it. RedHat network config gets stored in /etc/sysconfig/network-scripts/ifcfg-<interface>, one file per interface.

You need to create a config file for the bond0 interface, ifcfg-bond0:

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.70.101
NETMASK=255.255.255.0
NETWORK=192.168.70.0
BROADCAST=192.168.70.255
GATEWAY=192.168.70.1

And for each network card that belongs to this group you need to modify the existing files to look more or less like this:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
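
So, as a sketch for my setup where the second card is eth1, /etc/sysconfig/network-scripts/ifcfg-eth1 would be identical apart from the DEVICE line:

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes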

Once you have created these for each of your Ethernet cards, you can reboot or restart your networking using service network restart, and ifconfig should show something like this:

bond0     Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
          inet addr:192.168.70.101  Bcast:192.168.70.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:58071 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1465 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4315472 (4.1 Mb)  TX bytes:120360 (117.5 Kb)

eth0      Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:26447 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1262 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1992430 (1.9 Mb)  TX bytes:95078 (92.8 Kb)
          Interrupt:16

eth1      Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:31624 errors:0 dropped:0 overruns:0 frame:0
          TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2323042 (2.2 Mb)  TX bytes:25282 (24.6 Kb)
          Interrupt:17
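
You can also ask the bonding driver itself for its status. As a sketch (the exact path depends on the driver version; newer drivers expose it under /proc/net/bonding/bond0, older ones under /proc/net/bond0/info), the status file shows the bonding mode, the MII polling interval and the link state of each slave:

cat /proc/net/bonding/bond0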

You can tcpdump the individual interfaces to confirm that traffic is shared between them. Weirdly, on my machine tcpdump on eth0 and eth1 shows only outgoing traffic and no incoming, but dumping bond0 works a charm.
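
For reference, those commands are just the obvious ones, along the lines of:

tcpdump -i eth0
tcpdump -i eth1
tcpdump -i bond0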

To test it I just turned the power off to one of my switch modules; the networking dies for a couple of seconds but soon resumes without a problem. I am sure I could tweak the timings a bit, but for now this is all I need.
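
A simple way to watch the failover happen (a rough sketch rather than anything scientific) is to run a continuous ping against the bonded address from another box while you pull the switch, and to keep an eye on the bonding status file mentioned above:

ping 192.168.70.101
watch -n1 cat /proc/net/bonding/bond0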