High Availability Postfix mail server on GlusterFS

This article describes how to build a high-availability mail server on GlusterFS:

– Two CentOS Linux nodes
– GlusterFS shared storage
– NFS share for mail on GlusterFS
– Postfix mail server with SquirrelMail webclient
– Dovecot IMAP/POP server


So let’s get started.

For this article I used two private local nodes for testing; change the IPs to match your real configuration. GlusterFS can sync files and directories across different geo-locations.
If you keep both servers at the same physical location, use local IPs behind a firewall such as pfSense (optionally with an IDS such as Snort in front of it).

GlusterFS part:

First, edit the hosts file on every node and insert all the nodes that will be in the cluster:

cat /etc/hosts
127.0.0.1    localhost    localhost.localdomain localhost4 localhost4.localdomain4
::1    localhost    localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.200    test2.local    test2
192.168.1.201    test3.local    test3

yum install glusterfs glusterfs-fuse glusterfs-server postfix dovecot

service glusterd start

gluster peer probe 192.168.1.201 (node1)

gluster peer probe 192.168.1.200 (node2)

After probing, every node should have the other node's peer UUID file:

ls /var/lib/glusterd/peers

878b63e8-5a3c-4746-984a-a14f4918c4b8

cat /var/lib/glusterd/peers/878b63e8-5a3c-4746-984a-a14f4918c4b8

uuid=878b63e8-5a3c-4746-984a-a14f4918c4b8
state=3
hostname1=test3.local

service glusterd status
glusterd (pid  1620) is running…

Start glusterd on the other node too, then check the peer status on both nodes:

gluster peer status (node1)
Number of Peers: 1

Hostname: test3.local
Uuid: 878b63e8-5a3c-4746-984a-a14f4918c4b8
State: Peer in Cluster (Connected)

gluster peer status (node2)
Number of Peers: 1

Hostname: test2.local
Uuid: 0d06c152-3966-4938-a1c4-84b624689927
State: Peer in Cluster (Connected)

Now let’s create the GlusterFS volume.

Before you run the command, be careful with sysctl! I ran into trouble because net.ipv4.ip_nonlocal_bind had been set to 1 in sysctl.conf (left over from testing Heartbeat and Corosync on these nodes), and volume creation failed.
So change it back from 1 to 0 in sysctl.conf and run sysctl -p to reload the kernel parameter.
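A quick way to check and apply the change (sysctl -w adjusts the running value immediately; the sysctl.conf edit makes it persistent):

sysctl net.ipv4.ip_nonlocal_bind        # show the current value
sysctl -w net.ipv4.ip_nonlocal_bind=0   # change it on the fly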

So create the volume (the /export/brick1 brick directories must already be prepared on both nodes; see the XFS part below):

gluster volume create gv0 replica 2 test2:/export/brick1 test3:/export/brick1

You can check the volume with this command:

gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: da3d4c48-d168-4b4f-9590-e8d87cf5aa87
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: test2.local:/export/brick1
Brick2: test3.local:/export/brick1

Start the volume with this command:

gluster volume start gv0
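If you run GlusterFS 3.3 or newer, you can also confirm that the brick and NFS server processes are up:

gluster volume status gv0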

XFS part:

Next, load the XFS module (CentOS 6.3 already ships kmod-xfs):

modprobe xfs

Create an XFS file system on the extra disk that will hold the GlusterFS brick:

mkfs.xfs -i size=512 /dev/vdb1
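Then create the brick directory and mount the file system on both nodes; an fstab entry keeps it mounted across reboots (this assumes the brick disk is /dev/vdb1 as above):

mkdir -p /export/brick1
mount /dev/vdb1 /export/brick1
echo "/dev/vdb1 /export/brick1 xfs defaults 0 0" >> /etc/fstab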

NFS part:

Then install the NFS utilities.

yum install nfs-utils

And mount the GlusterFS volume over NFS:

mount -o mountproto=tcp,vers=3 -t nfs test2.local:/gv0 /mnt/
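Note that mounting from test2.local makes node1 a single point of failure for the mount. One option is for each node to mount the volume from itself, so the mount does not depend on the other node:

mount -o mountproto=tcp,vers=3 -t nfs localhost:/gv0 /mnt/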

Check the mounts:

mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/vda1 on /boot type ext4 (rw)
/dev/vdb1 on /export/brick1 type xfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
test2.local:/gv0 on /mnt type nfs (rw,mountproto=tcp,vers=3,addr=192.168.1.200)

Start services automatically at boot:

chkconfig nfs on

chkconfig glusterd on
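The NFS mount itself will not survive a reboot. A matching fstab entry (with _netdev so it is mounted only after the network is up) and the netfs service take care of that:

test2.local:/gv0  /mnt  nfs  defaults,_netdev,mountproto=tcp,vers=3  0 0

chkconfig netfs on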

Postfix part:

Create the mail directory tree on the shared volume, so both nodes deliver to the same spool (a symlink from /mnt/var back to the local /var would keep the mail on local disk and defeat the shared setup):

mkdir -p /mnt/var/spool/mail

Then, in /etc/postfix/main.cf, prefix every path that contains /var/ with an extra /mnt, like this:

From this: mail_spool_directory = /var/spool/mail
To this: mail_spool_directory = /mnt/var/spool/mail
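You can also make the change with postconf instead of editing main.cf by hand:

postconf -e 'mail_spool_directory = /mnt/var/spool/mail'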

And configure Postfix as usual.

Dovecot part:

Change the default mail location in /etc/dovecot/conf.d/10-mail.conf

From this: mail_location = maildir:~/Maildir
To this: mail_location = mbox:~/mail:INBOX=/var/mail/%u

With this configuration Dovecot keeps the mail in the traditional Unix mbox format instead of the Maildir format.
To reach the mail from both nodes, /var/mail has to resolve to the shared spool on each of them.
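One way to do that, assuming the stock CentOS layout where /var/mail is a symlink to /var/spool/mail, is to repoint it at the shared spool on both nodes:

ln -sfn /mnt/var/spool/mail /var/mail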

Configure the rest of Dovecot as usual.
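Then restart both daemons and enable them at boot on each node:

service postfix restart
service dovecot restart
chkconfig postfix on
chkconfig dovecot on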

With this setup you have a shared mail store on an NFS-mounted GlusterFS volume, so users can reach their mail even if one of the nodes fails. Configure the MX records to deliver mail to the second node when the first is unreachable (see the example below).
You must create the same Unix users with the same UIDs on both nodes, otherwise the mailbox ownership will not match and the whole setup cannot work.
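A minimal zone snippet for the backup MX (example.com is a placeholder for your domain), with node2 as the lower-priority relay:

example.com.    IN  MX  10  test2.local.
example.com.    IN  MX  20  test3.local.

And to keep the UIDs identical, create each user with an explicit UID on both nodes (the user name and UID here are just examples):

useradd -u 5001 -m john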
