I purchased an IBM BladeCenter for a number of our systems. It is a compact blade system that puts 14 servers in 7U.
My typical server config is dual P4 3GHz CPUs, 2 GB of RAM and 2 x 40 GB IDE drives, and the machines come with an AMI IDE RAID card. The RAID card is very impressive in that it presents the OS with a single SCSI device, much nicer than the Promise cards etc.
Individual servers have dual gigabit Ethernet cards that go out the back through dual Layer 7 Nortel switches. Obviously I wanted to bond these for high availability and load sharing.
Read on for details on how this was done using RedHat Enterprise Linux.
The first thing to know is that this stuff is in the kernel, and there is a good doc in your kernel source tree under Documentation/networking/bonding.txt that has a lot more detail than I am going to provide here.
A virtual network interface gets created, bond0 in my case; this gets done in /etc/modules.conf:
alias bond0 bonding
options bond0 miimon=100 mode=balance-rr
The above creates the bond0 interface and sets some options: it will check the MII state of each card every 100 milliseconds for link state changes, and it will use the round-robin balancing policy. More on these options, and many more, in bonding.txt.
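Round-robin is not the only policy the driver supports. If what you want is pure failover rather than load sharing, a minimal sketch of an active-backup setup in /etc/modules.conf, using the options documented in bonding.txt, would look like this:

alias bond0 bonding
# only one card carries traffic; the other takes over on link failure
options bond0 miimon=100 mode=active-backup primary=eth0

The primary parameter is optional and just marks eth0 as the preferred card.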
RedHat’s RC scripts support this bonding configuration without much modification, though there isn’t a GUI tool to configure it. RedHat network config gets stored in /etc/sysconfig/network-scripts/ifcfg-<interface>.
You need to create a config file for the bond0 interface, ifcfg-bond0:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.70.101
NETMASK=255.255.255.0
NETWORK=192.168.70.0
BROADCAST=192.168.70.255
GATEWAY=192.168.70.1
And for each network card that belongs to this group, you need to modify its existing file to look more or less like this:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
Once you have created these for each of your Ethernet cards, you can reboot or restart your networking using service network restart, and you should see something like this:
bond0     Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
          inet addr:192.168.70.101  Bcast:192.168.70.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:58071 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1465 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4315472 (4.1 Mb)  TX bytes:120360 (117.5 Kb)

eth0      Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:26447 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1262 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1992430 (1.9 Mb)  TX bytes:95078 (92.8 Kb)
          Interrupt:16

eth1      Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:31624 errors:0 dropped:0 overruns:0 frame:0
          TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2323042 (2.2 Mb)  TX bytes:25282 (24.6 Kb)
          Interrupt:17
You can tcpdump the individual interfaces to confirm that traffic is shared between them. Weirdly though, on my machine tcpdump on eth0 and eth1 does not show incoming traffic, just outgoing; dumping bond0 works a charm though.
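If you want to watch this yourself, the dumps are just the ordinary ones, run in separate terminals:

tcpdump -n -i bond0   # shows traffic in both directions
tcpdump -n -i eth0    # on my machine: outgoing packets only
tcpdump -n -i eth1    # on my machine: outgoing packets only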
To test it I just turned the power off to one of my switch modules; the networking dies for a couple of seconds but soon resumes without a problem. I am sure I could tweak the times a bit, but for now this is all I need.
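You can also ask the driver what it thinks happened. On the kernels I have seen, the bonding module exposes its state under /proc, showing the mode, the MII polling interval and the link status of each slave:

cat /proc/net/bonding/bond0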
Hmm, interesting.
This is something I now have to try out with our Dell blades 🙂
In effect, the hardware probably does something similar already in terms of performance (but not HA), as we only have two gigabit ports for the 8 blades in the server, but each blade has two interfaces as well.
Thanks for this post. I was looking into setting this up on our mp3 server at work to increase the throughput… Found a spare network card lying around and just got it up and running.
Surprisingly enough this was the most detailed explanation I found, but it got the job done.
Cheers,
Matt
Hello!
Nice use of bonding, but don’t forget that if you try to use it as a DHCP client (to assign an IP to the bonded interface), you have to provide a hardware address to it!
Use the MAC of eth0 or eth1 for that!
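Something like this in ifcfg-bond0 should do it (just a sketch; the MAC below is the eth0 address from the article, so replace it with your own, and I believe it is the MACADDR line that the RedHat scripts use to force the address):

DEVICE=bond0
BOOTPROTO=dhcp
ONBOOT=yes
# copy the MAC from eth0 or eth1
MACADDR=00:0D:60:9D:24:68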
Ciao!
I have a Cisco switch. Do both connections have to be in the same switch? Can I connect one NIC to switch A and one to switch B?
Hey,
If you create a VLAN across the two Ciscos (by plugging them into each other, setting up a trunk connection or two, and creating a VLAN that spans the two switches), you can plug one Ethernet cable into each of the switches and it will work fine.
Have you ever tried multi-path routing? It sounds like it has the same result that bonding does (in a nutshell). I have used both bonding and multi-path routing, but the thing that prevents me from continuing to use them is “round robin”. We are online with some apps that require consistent IP addresses. Do you know of a bonding or multi-path routing configuration that does not use a “round-robin” approach? Thanks, Jeff.
I had a little trouble with getting the link to start. Here is a solution:
ifconfig bond0 192.168.70.101 netmask 255.255.255.0 \
    broadcast 192.168.70.255 up
ifenslave bond0 eth0
ifenslave bond0 eth1
Without the ifenslave commands, the routing is incomplete.
I am facing a problem while configuring Ethernet bonding in Linux.
Hi, I was wondering if anyone could help me with bonding 2 or more ADSL lines, each connected to a Cisco ADSL router, which then go into a Debian box, and from the Debian box to my network. The idea is to bond the 2 Ethernet cards that are connected to the 2 Cisco ADSL routers to create one fat pipe of bandwidth going out to my network, while at the same time giving me load balancing and failover.
When I try this using a hub, I get duplicate packets on transmission and receiving. Using a switch, the switch takes care of this, passing packets to only one interface and accepting packets from only one interface. Is this how it’s meant to work?
I’ve not used it in an environment with a hub, but I’d say it’s normal for all interfaces to get the packets there, since hubs don’t isolate traffic. This might very well confuse how things work.
I can’t understand anything.
How can you make the active port deterministic at bootup?
I need eth0 to always be the active port at boot because it is connected to the primary VRRP switch.
But I do not want to use the ‘primary’ parameter, because once a port fails over I want the new port to remain active until it fails.
Hi, I need your help: how do I configure 2 NICs with 1 IP on RHEL5? I use an HP ML370 G5 server with 2 NICs, and I want to use those NICs with a single IP.