{"id":1789,"date":"2010-09-22T09:11:19","date_gmt":"2010-09-22T08:11:19","guid":{"rendered":"http:\/\/www.devco.net\/?p=1789"},"modified":"2010-09-22T09:11:19","modified_gmt":"2010-09-22T08:11:19","slug":"experience_with_glusterfs","status":"publish","type":"post","link":"https:\/\/www.devco.net\/archives\/2010\/09\/22\/experience_with_glusterfs.php","title":{"rendered":"Experience with GlusterFS"},"content":{"rendered":"

I need shared storage for around 300GB of 200×200 image files. These files are written once, then resized and stored. Once stored they never change, though they might eventually get deleted.<\/p>\n

They get served through 10 Squid machines with huge cache times – on the order of years. In other words this is a very low IO setup: very few writes, reasonably few reads, and the data isn’t that big – just a lot of files, around 2 million.<\/p>\n

In the past I used a DRBD + Linux-HA + NFS setup to host this, but I felt there was a bit too much magic involved, and I also felt it would be nice to use both nodes at a time rather than active-passive.<\/p>\n

I considered many alternatives; in the end I settled on GlusterFS based on the following:<\/p>\n