{"id":197,"date":"2004-11-26T17:28:55","date_gmt":"2004-11-26T16:28:55","guid":{"rendered":"http:\/\/wp.devco.net\/?p=197"},"modified":"2009-10-09T17:10:53","modified_gmt":"2009-10-09T16:10:53","slug":"linux_ethernet_bonding","status":"publish","type":"post","link":"https:\/\/www.devco.net\/archives\/2004\/11\/26\/linux_ethernet_bonding.php","title":{"rendered":"Linux ethernet bonding"},"content":{"rendered":"

I purchased an IBM BladeCenter for a number of our systems. It is a compact blade system that puts 14 servers in 7U.

My typical server config is a dual P4 3GHz with 2 GB RAM, 2 x 40 GB IDE drives, and an AMI IDE RAID card. The RAID card is very impressive in that it presents the OS with a single SCSI device, much nicer than the Promise cards etc.

Individual servers have dual gigabit Ethernet cards that go out the back through dual Layer 7 Nortel switches. Obviously I wanted to bond these for high availability and load sharing.

Read on for details on how this was done using RedHat Enterprise.

The first thing to know is that this stuff is in the kernel, and there is a good doc in your kernel source tree under Documentation/networking/bonding.txt. It has a lot more detail than I am going to provide here.
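If you don't have the kernel source unpacked, RedHat also ships the same document in its kernel-doc package; the exact paths below are assumptions and may differ per kernel version:

  # Read the bonding doc from the kernel source tree
  less /usr/src/linux/Documentation/networking/bonding.txt

  # Or from RedHat's kernel-doc package (path varies by version)
  less /usr/share/doc/kernel-doc-*/Documentation/networking/bonding.txt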

A virtual network interface gets created, bond0 in my case. This gets done in /etc/modules.conf:

  alias bond0 bonding
  options bond0 miimon=100 mode=balance-rr

The above creates the bond0 interface and sets some options: it will check the MII state of each card every 100 milliseconds for state change notification, and it will use the round robin balancing policy. More on these and the many other options in bonding.txt.
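If you only care about failover and not load sharing, bonding.txt also documents an active-backup mode, where one slave carries all traffic and another takes over when it fails. A minimal sketch of that variant:

  # Failover-only alternative: one active slave, others on standby
  alias bond0 bonding
  options bond0 miimon=100 mode=active-backup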

RedHat’s RC scripts support this bonding configuration without much modification, though there is no GUI tool to configure it. RedHat network config gets stored in /etc/sysconfig/network-scripts/ifcfg-&lt;interface&gt;.

You need to create a config file for the bond0 interface, ifcfg-bond0:

  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.168.70.101
  NETMASK=255.255.255.0
  NETWORK=192.168.70.0
  BROADCAST=192.168.70.255
  GATEWAY=192.168.70.1

And each network card that belongs to this group needs its existing file modified to look more or less like this (eth1 gets the same treatment, shown after this block):

  DEVICE=eth0
  BOOTPROTO=none
  ONBOOT=yes
  TYPE=Ethernet
  MASTER=bond0
  SLAVE=yes
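For completeness, the matching ifcfg-eth1 is identical apart from the device name; this assumes your second card really is eth1:

  DEVICE=eth1
  BOOTPROTO=none
  ONBOOT=yes
  TYPE=Ethernet
  MASTER=bond0
  SLAVE=yes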

Once you have created these for each of your Ethernet cards you can reboot, or restart your networking using service network restart, and you should see something like this:

  bond0     Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
            inet addr:192.168.70.101  Bcast:192.168.70.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
            RX packets:58071 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1465 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:4315472 (4.1 Mb)  TX bytes:120360 (117.5 Kb)

  eth0      Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
            UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
            RX packets:26447 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1262 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:1992430 (1.9 Mb)  TX bytes:95078 (92.8 Kb)
            Interrupt:16

  eth1      Link encap:Ethernet  HWaddr 00:0D:60:9D:24:68
            UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
            RX packets:31624 errors:0 dropped:0 overruns:0 frame:0
            TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:2323042 (2.2 Mb)  TX bytes:25282 (24.6 Kb)
            Interrupt:17
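The bonding driver also exposes its state under /proc, which is a quick way to confirm the active mode, MII status and slave list (the exact output format varies by driver version):

  # Show bonding mode, MII polling interval and per-slave status
  cat /proc/net/bonding/bond0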

You can tcpdump the individual interfaces to confirm that traffic gets shared between them. Weirdly though, on my machine tcpdump on eth0 and eth1 does not show incoming traffic, just outgoing; dumping bond0 works a charm though.
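Something like the following is what I mean, run in separate terminals; the -n flag just skips DNS lookups and the interface names assume the setup above:

  # Watch each slave and the bond itself
  tcpdump -n -i eth0
  tcpdump -n -i eth1
  tcpdump -n -i bond0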

To test it I just turned off the power to one of my switch modules. The networking dies for a couple of seconds but soon resumes without a problem. I am sure I could tweak the times a bit, but for now this is all I need.
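If you can't pull the power on a switch module, a rough software-only approximation is to down one slave and watch the bond fail over. It is not identical to a real link failure, but it exercises the same path; this sketch assumes the interface names above:

  # Simulate losing one slave
  ifconfig eth0 down

  # Watch the bonding state fail over to the other slave
  watch -n1 cat /proc/net/bonding/bond0

  # Bring the slave back
  ifconfig eth0 up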