Linux server migration with VMware converter

The next post will show you how to migrate a live Linux/Windows machine from any source to any destination remotely.

I'll document the whole process here and post all the related pictures of this migration.
I am going to use VMware Converter, which will handle everything.


– Running VMware server

– Source machine with SSH or RDP connection (Linux/Windows)

– Destination VMware server

– VMware converter (free to download from VMware site)

OK, let's start up VMware Converter and connect to the source machine.

Here you need to select the source type as Powered on machine.
You also need to add the login name and password.




On the next tab you need to type the destination machine's IP address and the login details.




Then on the next tab you must add the machine name.
On Linux this will be picked up from the hosts file automatically; you can leave it as it is if you prefer.



At the next step you must be really careful with the virtual machine version number.
VMware automatically offers version 10, which can only be managed from vCenter, and that is not free.
So change this to version 8 or lower; then you will be able to manage the machine from the ESXi vSphere Client.
This version refers to the virtual hardware type. I use version 8, which is the highest version you can manage for free, without any licensing issue or cost.



On the destination page you should also choose which datastore you want to use for the machine.

On the next tab, Converter will ask for the final parameters of the conversion.
Here you must edit the Helper VM network tab and add an extra IP on the local network where the destination VMware server is.
Without this, the converter usually dies at around 1% or 2% without any further notification.





Also check the advanced options: 'Reconfigure destination virtual machine' should be ticked.
This will fix the initial ramdisk on the destination machine.


After this we can start the real migration process.



This will create the machine on the destination server, automatically start it up, and begin pulling down the data from the source machine.


Destination server console with the running machine while it’s pulling down the data from source:


You can see the progress is quite quick; it depends on the actual network speed and the CPU and disk speed of the source and destination machines.


On the source machine, VMware Converter uses the tar command to compress the full disk into an image and sends it over the network to the destination machine as a compressed file.
The source gets a bit overloaded by this process, but of course it depends on the source box. This one is not a physical machine, just a virtual one with 1 CPU socket and 512 MB of RAM.
So it is definitely not a strong box, and it runs other websites at the moment; that's why the load is high in top.
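The mechanism can be pictured with plain tools: a sketch of streaming a compressed tar of a directory and listing the stream back to verify it. The paths here are made up for illustration, and the real converter handles whole disks, not a test directory.

```shell
# Archive a tree into a compressed stream, then list the stream back.
# In a real migration the stream would go over the network instead,
# e.g.:  tar czf - /data | ssh backup-host 'cat > image.tgz'
# (/data and backup-host are placeholder names)
src=$(mktemp -d)
echo hello > "$src/file.txt"
tar czf /tmp/image.tgz -C "$src" .
tar tzf /tmp/image.tgz
```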


Now let’s see the finished converted machine:


Indeed it reached the final stage, and pulling down the whole machine took about an hour.
The destination machine is currently switched off on the destination server, but it contains a full copy of the source.
So let's see if it boots up without any issues:


It seems we got a kernel panic because the local disk UUID was missing.
So we will need a CentOS disc now and have to boot the box in rescue mode to fix the disk UUID.
Upload the same CentOS version that you used on the source box and add it to the VMware server.
I've got the 64-bit version on 7layer, so I'll use that to fix the destination machine.




Boot up the machine, but be quick: you will have only about one second to press Escape at the BIOS screen and choose to boot from the CD-ROM.
At the CentOS boot screen you should then choose the 'Rescue installed system' option to fix the box.



Then choose the Continue option, which gives you the option to modify the root file system.

Then at the next screen you can see the root system mounted under /mnt/sysimage.




Then choose the shell option from the menu.





Now we have the whole root file system mounted, so we can check the fstab and boot entries on the box.
Check the grub device map in /boot/grub/ and /boot/grub/grub.conf for the HDD type.
Also check /etc/fstab to make sure it's correct.

Old fstab:



New fstab, corrected by VMware during conversion:


Also device map looks correct:


After we have checked these, we need to fix the GRUB loader and the initial ramdisk.

This procedure can also be found on the VMware site:

Rebuild the initial ramdisk:


mkinitrd -v -f /boot/initramfs-2.6.32-431.29.2.el6.x86_64.img 2.6.32-431.29.2.el6.x86_64

The kernel version should match your GRUB kernel config, so check it in the /boot directory.
Rebuilding the whole initial ramdisk takes about 5-10 seconds.
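Picking the right version string for mkinitrd can be scripted; a small sketch, with a temp directory standing in for /boot and a single installed kernel assumed:

```shell
# Derive the kernel version string from the kernel image name.
# A temp directory stands in for /boot in this sketch.
boot=$(mktemp -d)
touch "$boot/vmlinuz-2.6.32-431.29.2.el6.x86_64"
kver=$(basename "$boot"/vmlinuz-* | sed 's/^vmlinuz-//')
echo "$kver"
# The rebuild command would then be:
#   mkinitrd -v -f /boot/initramfs-$kver.img $kver
```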

We also need to correct the disk UUID in the GRUB boot loader. Follow one of the steps below:

Correcting the GRUB loader with a UUID change:

Run ls -l /dev/disk/by-uuid to check the correct UUID for the sda3 disk.

This is a very long string and it's easy to make mistakes here, so it's better to append it to grub.conf with the ls command and then move it to the correct place.

ls -l /dev/disk/by-uuid >> /boot/grub/grub.conf then move it from the end.

In this case the last line contains the correct UUID, which belongs to /dev/sda3.
So move this to the root=UUID= line.
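Extracting that UUID can also be done with awk instead of hand-copying; a sketch using a simulated ls -l line (the UUID is the one from the grub.conf shown later in this post):

```shell
# Pull the UUID that belongs to sda3 out of 'ls -l /dev/disk/by-uuid'
# style output. A sample line is simulated here; in the rescue shell
# you would pipe the real ls output instead.
sample='lrwxrwxrwx 1 root root 10 Jan 1 12:00 c8fbeb09-a9d6-449f-8a99-6f83b7cf4362 -> ../../sda3'
uuid=$(echo "$sample" | awk '/sda3/ {print $9}')
echo "root=UUID=$uuid"
```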





Correcting the GRUB loader by changing only the kernel line in /boot/grub/grub.conf:


Now you can run grub-install with the corrected disk name:



When it has finished, try to reboot the box. The reboot and halt commands usually won't work here in rescue mode.
Use the VMware machine's top menu: VM => Guest => Send Ctrl+Alt+Del to reboot the rescue system.
Then wait for the box to reboot and see if it works fine.




Voila it’s booting up! 🙂

If you have trouble with /lib/modules/{kernel-number}/modules.dep at boot, you need to rebuild the initial ramdisk again.
Investigate via the VMware site and carefully check the initial ramdisk name and kernel name in the /boot directory.

I'll mark the important parts which must be corrected, otherwise the kernel won't boot:


[ grub]# cat grub.conf | head -n 20

# Hetzner Online AG – installimage
# GRUB bootloader configuration file

timeout 5
default 0

title CentOS (2.6.32-431.29.2.el6.x86_64)
root (hd0,1)
kernel /vmlinuz-2.6.32-431.29.2.el6.x86_64 ro root=UUID=c8fbeb09-a9d6-449f-8a99-6f83b7cf4362 rd_NO_LUKS rd_NO_DM nomodeset crashkernel=auto SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYTABLE=de
initrd /initramfs-2.6.32-431.29.2.el6.x86_64.img


cat menu.lst | head -n 20
# Hetzner Online AG – installimage
# GRUB bootloader configuration file

timeout 5
default 0

title CentOS (2.6.32-358.6.1.el6.x86_64)
root (hd0,1)
kernel /boot/vmlinuz-2.6.32-358.6.1.el6.x86_64 ro root=UUID=c8fbeb09-a9d6-449f-8a99-6f83b7cf4362 rd_NO_LUKS rd_NO_DM nomodeset
initrd /boot/initramfs-2.6.32-358.6.1.el6.x86_64.img


(hd0) /dev/sda





SPF record setup for mail server

How to set up and test SPF record for mail server:

Let's check Google's SPF record first with the dig command.

[root@mail ~]# dig txt

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6_6.1 <<>> txt
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52169
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0


;; ANSWER SECTION: 3599 IN TXT "v=spf1 ip4: ip4: ~all"

;; Query time: 12 msec
;; WHEN: Fri Feb 27 08:47:05 2015
;; MSG SIZE rcvd: 116

[root@mail ~]#

In the answer section you can see the IP addresses. These are the servers that are allowed to send mail for the domain.
So you have your domain name, and you have your mail server on it with an A record. That server can send mail for its own name, but no other server is allowed to send mail for it. With the SPF record you can authorise extra IP addresses, so those servers can also send (relay) mail on the domain's behalf via Google's mail server.
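A quick shape check of an SPF TXT record can be scripted; the record below is a made-up example using a documentation address (RFC 5737), not a real policy:

```shell
# Minimal sanity check of an SPF record's shape: it must start with
# v=spf1 and end in an 'all' mechanism with an optional qualifier.
record='v=spf1 ip4: ~all'
echo "$record" | grep -Eq '^v=spf1( .+)? [~?+-]?all$' && echo "looks like SPF"
```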

You can also use domain names in an SPF record and tell the receiving server to use those instead of IP addresses.

[root@mail ~]# dig txt

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6_6.1 <<>> txt
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64785
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0


;; ANSWER SECTION: 21599 IN TXT "v=spf1 ip4: ~all" 21599 IN TXT "v=DMARC1\; p=none\; adkim=r\; aspf=r\; sp=none"

;; Query time: 36 msec
;; WHEN: Fri Feb 27 08:59:31 2015
;; MSG SIZE rcvd: 225

[root@mail ~]#


Create and check SPF records:

Header check for emails to analyse SPF and other issues:



MIPS Development

A MIPS-related development board from Imagination Technologies.

I just received my new CI20 MIPS-based development board from Imgtec and I must say this is a piece of engineering art! 🙂

I've already set up an Apache2 web server, an SSH server, a MySQL server, a basic firewall (with iptables) and a Postfix mail server.
I'm going to build an SMS server, hook it up to my CCTV system and publish everything about it shortly.

Thank you again, Imgtec!

Related links to CI20 and other Linux based hardware development:

Debian Distro Upgrade

So let's get our hands dirty with a Debian Linux distro upgrade!
This week I received a complaint about one of our servers, which had some dodgy outdated PHP packages installed on it.
I had to investigate what had happened with the box and fix the issue.
I found out it had Debian lenny installed, which is considered quite old and is end-of-life.
This release has had no security updates since 2012, so it had to be upgraded to a newer release to fix the issue.
Although this box is behind a firewall, it's still dangerous to have an outdated box sitting on the net.
So I had to do a full distro upgrade on the box, which follows here:
First I installed the latest packages from the original distro, which was lenny.
After the update I rebooted the box and changed the sources to the next release (squeeze first; the example below shows the wheezy step):
# nano /etc/apt/sources.list
deb wheezy main contrib non-free
deb-src wheezy main contrib non-free
deb wheezy/updates main contrib non-free
deb-src wheezy/updates main contrib non-free
Then I started the upgrade process like this:
# aptitude update
# aptitude safe-upgrade
# aptitude dist-upgrade
Follow the instructions from aptitude; it will ask what you want to do with conflicting packages.
For example, php.ini has a modified version - what to do?
Keep the current modified version or use the one provided by the distro?
Sometimes you need to use the distro-provided config file, otherwise the service won't be able to start up.
For example, I kept the mysql-server config and the new version could not start up.
So I replaced it with the new one, carried over some old settings, and voila, it started up just fine.
So to upgrade from lenny to wheezy you must upgrade first to squeeze, then to wheezy:
lenny -> squeeze -> wheezy
Be patient and prepare a few good coffees for the upgrade, because it will take some time!
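The sources.list switch between hops can be sketched like this (done on a throw-away copy rather than /etc/apt/sources.list, with the mirror URL left out as in the lines above):

```shell
# Rewrite the release name in a sources.list copy for the next hop
# (lenny -> squeeze shown; repeat with squeeze -> wheezy afterwards).
f=$(mktemp)
echo 'deb lenny main contrib non-free' > "$f"   # mirror URL omitted
sed -i 's/lenny/squeeze/g' "$f"
cat "$f"
```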

VMware free backup solution from virtuallyGhetto


VMware free backup solution for ESXi servers:

Download the script from GitHub, then modify it to fit your system.

I’m going to explain the important parts that I usually change in this script:

In the script file:

– Backup path
– Rotation
– Backup format
– Email server
– Email to
– Email from


# directory that all VM backups should go (e.g. /vmfs/volumes/SAN_LUN1/mybackupdir)


# Format output of VMDK backup
# zeroedthick
# 2gbsparse
# thin
# eagerzeroedthick


# Number of backups for a given VM before deleting
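Put together, the variables I touch look roughly like this (variable names as used by the ghettoVCB version I had - check your own copy; the path and rotation count are placeholder values, and the email fields are left blank here):

```shell
VM_BACKUP_VOLUME=/vmfs/volumes/datastore1/mybackupdir  # backup path (placeholder)
DISK_BACKUP_FORMAT=thin             # zeroedthick / 2gbsparse / thin / eagerzeroedthick
VM_BACKUP_ROTATION_COUNT=3          # backups kept per VM before deleting
EMAIL_LOG=1                         # send an email report
EMAIL_SERVER=                       # SMTP server
EMAIL_TO=
EMAIL_FROM=
```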

Also in ghettoVCB.conf







When you have uploaded the script and ghettoVCB.conf files, you need to set the execute flag on the script:

chmod +x

Then you can start backing up your VM machines.

To back up only one machine, run this:

./ -m vm_to_backup

To back up all machines:

./ -a

If you want a machine to be excluded from the backup, use an exclusion file to achieve this:
./ -a -e vm_exclusion_list

The VMware firewall won't allow the script to send outgoing emails; this needs to be fixed.
Upload an smtp.xml file to the VMware server and refresh the firewall; without this you will receive an error on the VMware SSH console.

Script:  smtp.xml

Upload it to /etc/vmware/firewall and run the esxcli update:

esxcli network firewall refresh

Then click on the server name, then Configuration, then Security Profile, and you will see the new smtp outbound port appear as a new outgoing firewall rule allowing SMTP traffic from the server.




Although you can use the restore script to restore machines from backup, you can also register the backed-up machines and use them straight away.
This is much quicker than the restore process, but obviously the machine will then reside on the backup path, not the original path. With this you can get the machine back ASAP, then create a backup onto the original path, shut down the backup-path machine, add the original-path machine to the inventory and start it up.


The only thing left to do is to make this process automatic.
Edit the crontab on your server and add this to it:
10 00 * * 1-5 /vmfs/volumes/ -f /vmfs/volumes/Fuji-NAS/backuplist > /vmfs/volumes/ghettoVCB-backup-$(date +\%s).log

On VMware ESXi 5.5 the crontab is located at /var/spool/cron/crontabs/, and the root file there contains the current crontab configuration.

cat /var/spool/cron/crontabs/root

#min hour day mon dow command
1 1 * * * /sbin/
1 * * * * /sbin/
0 * * * * /usr/lib/vmware/vmksummary/
*/5 * * * * /sbin/hostd-probe ++group=host/vim/vmvisor/hostd-probe
10 00 * * 1-5 /vmfs/volumes/datastore1/root/opt/ -f /vmfs/volumes/datastore1/root/opt/vmbackup.txt

This will run the backup at 00:10 (ten past midnight) on weekdays (1-5 = Monday to Friday), but you can change the schedule according to your needs.


NAS4Free high-availability iSCSI failover for VMware server.

The following post shows how to install and set up a NAS4Free server as iSCSI storage for your ESXi/ESX VMware server.
NAS4Free is based on FreeBSD and has all the required services (HAST and CARP) to act as a high-availability storage server.
Of course you can also use this solution on your network as a high-availability storage or as a Windows CIFS/Samba server if you modify the services on NAS4Free.
I'll stick to the iSCSI setup first; later we will show how to set up NFS and Windows (Samba) shares.

The following setup used here:

Node1 primary IP address for serving iSCSI and CARP services:
Node1 secondary IP address for HAST synchronisation:

Node2 primary IP address for serving iSCSI and CARP services:
Node2 secondary IP address for HAST synchronisation:

Virtual IP address(CARP address) for iSCSI service:

Node1 host name: has1
Node2 host name: has2

Install both nodes with the latest NAS4Free edition.

– Change the node names according to your setup, for example: node1 and node2.


– Add the node names to the hosts file on both nodes.

– Set up the CARP service under Network/Interface Management:


– Advertisement skew on has1 node: 0
– Advertisement skew on has2 node: 10

If the has1 node dies, the has2 node will take over all the services.


You must use the same link-up and link-down actions on both nodes, otherwise the switchover won't work properly!
So everything should be the same except the advertisement skew value.

Next step: set up the HAST service:



As you can see here, the second network interface card is used for HAST synchronisation, not the main interface.
After you set up the HAST service, reboot both nodes; Apply won't start the services for some reason.

– Switch on the SSH service and SSH into both nodes.

On Master issue these commands:

hastctl role init disk1
hastctl create disk1
hastctl role primary disk1

On Slave issue these commands:

hastctl role init disk1
hastctl create disk1
hastctl role secondary disk1

Check both nodes with: hastctl status
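For reference, HAST is driven by /etc/hast.conf under the hood (NAS4Free generates it from the GUI); its shape is roughly the following sketch, where the /dev/ada1 device path and the has1-sync/has2-sync remote addresses are placeholders for your disks and each peer's sync-interface address:

```conf
resource disk1 {
        on has1 {
                local /dev/ada1
                remote has2-sync
        }
        on has2 {
                local /dev/ada1
                remote has1-sync
        }
}
```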

Then configure ZFS.
On the Master:

Add disks (Disks->Management)

disk1: N/A (HAST device)
Advanced Power Management: Level 254
Acoustic level: Maximum performance
S.M.A.R.T.: Checked
Preformatted file system: ZFS storage pool device

Format as zfs (Disks->Format)

Add ZFS Virtual Disks (Disks->ZFS->Pools->Virtual Device)

Add Pools(Disks->ZFS->Pools->Management)

Add a PostInit script on both nodes under the System/Advanced/Command scripts tab:
/usr/local/sbin/carp-hast-switch slave

Shut down the master, and on the slave import the pool through the GUI (tab: ZFS/Configuration/Detected).
Then synchronise the pool on the slave!

When finished on the slave, start the master and switch the VIP back to the master.

zpool status disk1
hastctl status

Troubleshooting commands from SSH terminal:

zpool status

nast1: ~ # zpool status mvda0
  pool: mvda0
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
  scan: none requested

        NAME                   STATE     READ WRITE CKSUM
        mvda0                  UNAVAIL      0     0     0
          2144332937472371213  REMOVED      0     0     0  was /dev/hast/hast

If the status is unavailable, you can try:

zpool clear "pool name"

This clears the error state and starts a scan/scrub of the local disks.

nast1: ~ # zpool status mvda0
  pool: mvda0
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
  scan: scrub in progress since Mon Jun  2 15:26:25 2014
        1.19G scanned out of 1.43G at 28.3M/s, 0h0m to go
        0 repaired, 82.75% done

        NAME         STATE     READ WRITE CKSUM
        mvda0        ONLINE       0     0     0
          hast/hast  ONLINE       0     0     0

Then check pool again:
zpool status

nast1: ~ # zpool status
  pool: mvda0
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Jun  2 15:27:17 2014

        NAME         STATE     READ WRITE CKSUM
        mvda0        ONLINE       0     0     0
          hast/hast  ONLINE       0     0     0

Recreate sync on disks or split brain:

On Master issue these commands:

hastctl role init disk1
hastctl create disk1
hastctl role primary disk1

On Slave issue these commands:

hastctl role init disk1
hastctl create disk1
hastctl role secondary disk1

If you lost sync because of a disk or network error, you can recreate the sync between the HAST disk(s).
Just recreate the roles and the nodes will start syncing the data (use the commands above). Be careful with the roles and the nodes - don't mix them up!
If you recreate the roles and the disks you won't lose any data; it will only start syncing the disk(s), it won't overwrite data.

If it's a split-brain scenario, you should decide which node has the newer data and issue the above commands accordingly. For example, if the secondary node has newer data than the primary, then obviously you should issue 'role primary' on the secondary node and 'role secondary' on the primary node, and vice versa.

High Availability Postfix mail server on GlusterFS

The next article: a high-availability Postfix mail server on GlusterFS.

– Two node CentOS Linux
– GlusterFS shared storage
– NFS share for mails on GlusterFS
– Postfix mail server with SquirrelMail web client
– Dovecot IMAP/POP server


So let’s get started.

In this article I used two local private nodes for testing.
You should change the IPs according to your real configuration. GlusterFS can sync files/directories across different geo-locations.
But if you want both servers at the same physical location, use a firewall such as pfSense or Snort and keep the nodes on local IPs behind it.

GlusterFS part:

First edit the hosts file and insert all the nodes which will be in the cluster.

cat /etc/hosts
localhost    localhost.localdomain localhost4 localhost4.localdomain4
::1    localhost    localhost.localdomain localhost6 localhost6.localdomain6
test2.local    test2
test3.local    test3

yum install glusterfs glusterfs-fuse glusterfs-server postfix dovecot

service glusterd start

gluster peer probe

gluster peer probe

On every node you should see the peer UUIDs of the other nodes.

ls /var/lib/glusterd/peers


cat /var/lib/glusterd/peers/878b63e8-5a3c-4746-984a-a14f4918c4b8


service glusterd status
glusterd (pid  1620) is running…

Start glusterd on the other node too.
Then check the peer status on both nodes:

gluster peer status (node1)
Number of Peers: 1

Hostname: test3.local
Uuid: 878b63e8-5a3c-4746-984a-a14f4918c4b8
State: Peer in Cluster (Connected)

gluster peer status (node2)
Number of Peers: 1

Hostname: test2.local
Uuid: 0d06c152-3966-4938-a1c4-84b624689927
State: Peer in Cluster (Connected)

Now let’s create the glusterfs volume.

Before you run the volume-create command, be careful with sysctl! I had trouble with net.ipv4.ip_nonlocal_bind in sysctl.conf, because I had used these nodes to test Heartbeat and Corosync, and I could not create the GlusterFS volume.
So change this parameter from 1 to 0 in sysctl.conf and run sysctl -p to reload it.

So create the volume:

gluster volume create gv0 replica 2 test2:/export/brick1 test3:/export/brick1

You can check the volume with this command:

gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: da3d4c48-d168-4b4f-9590-e8d87cf5aa87
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: test2.local:/export/brick1
Brick2: test3.local:/export/brick1

Start the volume with this command:

gluster volume start gv0

XFS part:

The next step is to install the XFS modules.

modprobe xfs  (CentOS 6.3 already ships kmod-xfs)

Create an XFS file system on the extra disk that you want to use for the GlusterFS brick.

mkfs.xfs -i size=512 /dev/vdb1

NFS part:

Then install the NFS utilities.

yum install nfs-utils

And mount the GlusterFS volume via NFS:

mount -o mountproto=tcp,vers=3 -t nfs test2.local:/gv0 /mnt/

Check the mounts:

/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/vda1 on /boot type ext4 (rw)
/dev/vdb1 on /export/brick1 type xfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
test2.local:/gv0 on /mnt type nfs (rw,mountproto=tcp,vers=3,addr=

Start services automatically at boot:

chkconfig nfs on

chkconfig glusterd on

Postfix Part:

Create a symbolic link to /var under /mnt

 ln -s /var/ /mnt/

Then in /etc/postfix/, prefix every reference to /var/ with /mnt/, like this:

From this: mail_spool_directory = /var/spool/mail
To this: mail_spool_directory = /mnt/var/spool/mail
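That prefixing can be done in one pass with sed; a sketch run against a throw-away copy rather than the real /etc/postfix/

```shell
# Prefix the /var paths in a copy with /mnt/var in one pass.
cf=$(mktemp)
echo 'mail_spool_directory = /var/spool/mail' > "$cf"
sed -i 's|= /var/|= /mnt/var/|g' "$cf"
cat "$cf"
```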

And configure Postfix as usual.

Dovecot Part:

Change the default mail location in /etc/dovecot/conf.d/10-mail.conf

from this: mail_location = maildir:~/Maildir
To this: mail_location = mbox:~/mail:INBOX=/var/mail/%u

In this configuration Dovecot will keep the mails in the old Unix mbox format, not the Maildir format.
And you can reach the mails from both nodes.

Configure the rest of dovecot as usual.

In this setup you have a shared mail system on an NFS volume, so users should be able to reach their mail all the time, whatever happens to the other node. The MX records are configured to deliver mail to the second node if the first is unreachable.
You need to use the same Unix users on both nodes, otherwise the mailboxes will get mixed up and the whole setup can't work.



VMware ESXi / VMware Player & KVM Preinstalled machines

Quick link to browse the download directory:

I've started making preinstalled, ready-to-use virtual machines for everybody, free of charge.
The machines will be categorised and sorted shortly. They can run on Windows, Linux and Apple, both server and client.
Image formats: OVF and KVM based.

If you have not used VMware before, follow this link: and download the Player (for a client) or ESXi (for a server), whichever you want to use. The Player can be used on any running Windows/Linux/Apple client, and the ESXi server can be used only as a server. Both of them are free: no need to pay any licence for them, and there is no time-limit restriction either.
Although on the ESXi server you must upload the free ESXi licence number, which you received when you downloaded it, within 60 days.

You can import these machines and start using them straight away.

Debian based Named DNS master-slave server.

Master server:
 Debian-master-dns readme
Slave server: Debian-slave-dns readme

pfSense OVF template image with Squid proxy server.
pfSense server: pfSense readme

CentOS 6.5 based server with Squid proxy and Webmin server.

CentOS 6.5: 64bit 32bit readme




Dovecot POP3/IMAP server

The next article is about how to install and set up a Dovecot server.

Start a new terminal then install the dovecot server:

yum install dovecot

In the /etc directory, edit the dovecot.conf file and make the changes shown below:

#you must add pop3 and pop3s to make these protocols work
protocols = imap imaps pop3 pop3s

#this part depends on which mail server you are using, e.g. Postfix or Sendmail
mail_location = mbox:~/mail:INBOX=/var/mail/%u

#you should add the mail group to the privileged user group, otherwise dovecot won't be able to read the mailbox file
mail_privileged_group = mail

#You need to set up the UIDL format, otherwise POP3 clients can't keep track of which messages they've already downloaded from the server.
#More hints here:
pop3_uidl_format = %08Xu%08Xv

#this part is needed to make Outlook work. More hints here:
imap_client_workarounds = delay-newmail outlook-idle netscape-eoh
pop3_client_workarounds = outlook-no-nuls oe-ns-eoh

#we need this part to reach the server with plain-text authentication. Use basic POP3 authentication only in a secure environment! Otherwise use secure SSL authentication.
#When you use the basic plain-text authentication method, all the data travels unencrypted on your network, so the login details and the password could be captured by anyone.
#Use an encrypted SSL connection to secure all the data in transit. In Outlook, tick the "This server requires an encrypted connection (SSL)" box.
#After that, Outlook will use SSL and every part of the communication will be secure.
#If you check the login entries in the maillog file, you will see TLS at the end of the line
#I will show examples of this further below
disable_plaintext_auth = no

To get SSL working you need to fill in this part of dovecot.conf:

ssl_cert_file = /etc/pki/tls/certs/dovecot.pem
ssl_key_file = /etc/pki/tls/private/dovecot.key
ssl_disable = no

Save the dovecot.conf and close it. We are set.

Start the service:

service dovecot start

Then test the POP3 server.

tail -F /var/log/maillog

The log below shows a basic plain-text login, using port 110:

Jan 22 00:11:04 ldapproxy dovecot: pop3-login: Login: user=<aaa>, method=PLAIN, rip=, lip=
Jan 22 00:11:04 ldapproxy dovecot: POP3(aaa): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
Jan 22 00:11:05 ldapproxy sendmail[8564]: p0M0B5XT008564: from=<>, size=407,, nrcpts=1, msgid=<201101220011.p0M0B5XT008564@ldapproxy.localdomain>, proto=ESMTP, daemon=MTA, relay=[]
Jan 22 00:11:05 ldapproxy sendmail[8566]: p0M0B5XT008564: to=<>, ctladdr=<> (505/505), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30693, dsn=2.0.0, stat=Sent

And this is how Wireshark captured the login name and password during the whole process:


Then change the authentication method in Outlook to use SSL (port 995).

The maillog will look like this one:

Jan 22 00:23:38 ldapproxy dovecot: pop3-login: Login: user=<aaa>, method=PLAIN, rip=, lip=, TLS
Jan 22 00:23:38 ldapproxy dovecot: POP3(aaa): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
Jan 22 00:23:38 ldapproxy sendmail[9010]: p0M0NcNf009010: from=<>, size=407,, nrcpts=1, msgid=<201101220023.p0M0NcNf009010@ldapproxy.localdomain>, proto=ESMTP, daemon=MTA, relay=[]
Jan 22 00:23:38 ldapproxy sendmail[9011]: p0M0NcNf009010: to=<>, ctladdr=<> (505/505), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30693, dsn=2.0.0, stat=Sent

Have you noticed the TLS at the end of the line? The whole communication was encrypted!
Take a look at Wireshark's captured data: the whole process was encrypted.


To test your Dovecot server locally without any POP3 client, just start telnet:

[root@ldapproxy etc]# telnet 110

Connected to (
Escape character is '^]'.
+OK Dovecot ready.
user aaa
pass 123456
+OK Logged in.
+OK 1 messages:
1 743

retr 1
+OK 599 octets
Return-Path: <root@ldapproxy.localdomain>
Received: from ldapproxy.localdomain (localhost.localdomain [])
by ldapproxy.localdomain (8.13.8/8.13.8) with ESMTP id p0O07gY3032579
for <aaa@ldapproxy.localdomain>; Mon, 24 Jan 2011 00:07:42 GMT
Received: (from root@localhost)
by ldapproxy.localdomain (8.13.8/8.13.8/Submit) id p0O07gRw032578
for aaa; Mon, 24 Jan 2011 00:07:42 GMT
Date: Mon, 24 Jan 2011 00:07:42 GMT
From: root <root@ldapproxy.localdomain>
Message-Id: <201101240007.p0O07gRw032578@ldapproxy.localdomain>
To: aaa@ldapproxy.localdomain
Subject: test


More references and hints here:
And troubleshooting here:

Linux/Windows Troubleshooting part 2

Network troubleshooting part 2:

The next article is about some basic DNS troubleshooting.
First we will do it on Linux with the dig command, then we will check out nslookup on Windows too.

dig any @

This command queries the domain at Google's DNS server (the @ part) and asks for all available records (any) on the domain.
I have highlighted all the important parts of this command. In all, 7 records were found, as you can see in the picture above:



You can change the server easily with the @ part; you can put in your own DNS server if you want to check your freshly updated local zone.
Full DNS zone propagation theoretically takes up to 2 days, but usually a few hours is enough for the update to appear nearly everywhere.
If you leave off the @server-IP-address part completely, dig will use the current DNS server from /etc/resolv.conf.
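What dig falls back to is just the nameserver entries in /etc/resolv.conf; a sketch of reading them, with a simulated file and a documentation address (RFC 5737):

```shell
# Print the nameserver addresses dig would fall back to.
# A temp file stands in for /etc/resolv.conf in this sketch.
rc=$(mktemp)
printf 'search example.local\nnameserver\n' > "$rc"
awk '/^nameserver/ {print $2}' "$rc"
```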
For example:

dig any

To check only the MX records for the domain, change any to mx, like this:

dig mx


The next one is how to check the reverse (PTR) record for an IP address.

dig -x

As you can see in the answer section, the command found the reverse record, which points back to the domain.


So let’s take a look at this with Windows nslookup:



set type=any



You can see in the answer that all the nameserver addresses and A records are there, and both MX records are presented too.
To check only the MX records, simply change the type to mx, like this:

set type=mx

You will get only the MX records back from the server:



Windows Update troubles:

I was updating a few servers at my workplace remotely in the datacenter, and one of them didn't reboot properly.

So the issue was this:

– The server was updated with new service packs.
– The reboot was initiated via RDP (Remote Desktop).
– RDP is no longer reachable, because that service had already been shut down and its connections dropped.
– The server is still pingable.
– There is no other way to reach the server anymore (no IPMI/KVM/DRAC).


– Go to the datacenter and restart the server manually. On a Saturday that's not much fun, let's be honest.
– Phone the datacenter and ask for remote hands… It takes ages to explain everything: server number, rack location, etc.
– Download PsTools from here: and kill the winlogon process which is stuck on the server.

Extract PsTools and first try this command:

psexec \\REMOTE_SERVER_NAME shutdown /r /t 0

This will try to execute the shutdown command on the remote box and restart the server. The /r switch means reboot and /t is the delay time, which is zero here.
If this doesn't help for some reason, you can try the pskill.exe command.

pskill [-t] [\\computer [-u username [-p password]]] <process ID | name>

pskill \\ -u mydomain\Administrator -p mylovelypassword Winlogon

This should work, and you won't need to go to the datacenter or phone them up asking for a reboot.
You can monitor the server with the ping command; you will see when the server really reboots, because you will lose the ping.

This one has saved me so many times on my weekends, when I usually do Windows updates. (Just like right now :) )
On weekdays you can't really do Windows updates on corporate servers, because they are heavily used, so a reboot is not a good idea then.

Next issue will be posted shortly…
