
Thursday, January 28, 2016

CentOS Rescue

Rebuilt mdadm.conf and re-configured GRUB on the MBR of sda

To rebuild mdadm.conf:
==> mdadm --detail --scan >> /etc/mdadm.conf
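
After running it, /etc/mdadm.conf should contain one ARRAY line per md device, roughly like the following (the UUIDs here are only placeholders):

ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 metadata=1.2 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy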

You can copy the partition layout from an existing disk to the newly replaced one as follows.

==> sfdisk -d /dev/sda | sfdisk /dev/sdb

Important: This dumps the partition table of sda onto sdb, completely removing any existing partitions on sdb.
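
If you want a safety net, you can dump sdb's current table to a file first and feed it back to sfdisk later to restore it (the file name is arbitrary):

==> sfdisk -d /dev/sdb > /root/sdb-partitions.bak
==> sfdisk /dev/sdb < /root/sdb-partitions.bak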


Reconfigure Grub via rescue mode
================

Let's say /dev/sda contains the boot partition, with sda1 mounted as /boot.

First, make sure the boot flag is enabled on sda1 (fdisk shows a * next to bootable partitions). You can mark it as bootable with the 'a' command in fdisk.
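
The fdisk session looks roughly like this (prompts vary slightly between fdisk versions):

#fdisk /dev/sda
Command (m for help): a
Partition number (1-4): 1
Command (m for help): w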

#grub-install --boot-directory=/boot /dev/sda
#update-grub
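
The two commands above assume Debian-style GRUB tooling; on a GRUB2-based CentOS 7 system the rough equivalent would be:

#grub2-install /dev/sda
#grub2-mkconfig -o /boot/grub2/grub.cfg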


You will not see any /dev/md[], /dev/sd[] or /dev/hd[] devices listed, as they are not automatically mounted by the rescue system. In some cases, such as MySQL crashes, you can start the daemon by mounting the root filesystem and binding the virtual filesystems in rescue mode as follows.

mount /dev/sdb3 /mnt/sysimage
mkdir /mnt/sysimage/proc
mkdir /mnt/sysimage/sys
mkdir /mnt/sysimage/dev
mount -o bind /proc /mnt/sysimage/proc
mount -o bind /sys /mnt/sysimage/sys
mount -o bind /dev /mnt/sysimage/dev
chroot /mnt/sysimage
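
Once inside the chroot you can start the crashed service and check its logs; for a MySQL crash that would look roughly like this (service and log names depend on the installed version):

service mysqld start
tail -n 50 /var/log/mysqld.log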

Flashcache On SSD


Git Clone Flashcache:

git clone https://github.com/facebook/flashcache.git

Install and Configure Flashcache (Note: Please remember to change the versions to match your uname -r output!)

cd flashcache/
make KERNEL_TREE=/usr/src/kernels/2.6.32-042stab075.10
make install KERNEL_TREE=/usr/src/kernels/2.6.32-042stab075.10
modprobe flashcache
Make sure it's running: dmesg | tail
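
As an extra sanity check, the module should also show up in lsmod:

lsmod | grep flashcache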

Making /vz flashcached:
umount /vz
Find your UUID for /vz: grep "/vz" /etc/fstab
flashcache_create -p back vz_cached /dev/sdb /dev/disk/by-uuid/replace-with-your-uuid
Comment out vz in fstab: nano /etc/fstab
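
If you want to use the cache before rebooting, mount the new device-mapper device (its name matches the cache name passed to flashcache_create) instead of the raw partition:

mount /dev/mapper/vz_cached /vz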

Configuring Flashcache

Copy the config file: cp /tmp/flashcache/utils/flashcache /etc/init.d
Change the permissions: chmod 755 /etc/init.d/flashcache
nano /etc/init.d/flashcache

Change the following information:

SSD_DISK=/dev/sdb
BACKEND_DISK=/dev/disk/by-uuid/replace-with-your-uuid
CACHEDEV_NAME=vz_cached
MOUNTPOINT=/vz
FLASHCACHE_NAME=vz_cached
Ctrl+X and save it.

Turn on Flashcache at boot: chkconfig flashcache on
Reboot the Server: reboot -n

Make sure everything looks good and run df -h.

Do a DD test to make sure flashcache is working.

First do a DD test in the root directory:
cd /
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test

Note the output

Now do a DD test in the /vz directory.
cd /vz
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test

Note awesomeness.
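
To confirm the cache is actually being hit, check the device-mapper status for the cache device (flashcache builds also expose counters under /proc/flashcache):

dmsetup status vz_cached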

Everything is now set up as far as the base system with SolusVM OpenVZ and Flashcache!
---------------------------------------------------------------------------------------------------------

LVM Special Options


Create swap on a logical volume:
# lvcreate -C y -L 10G VolGroup00 -n lvolswap
Fill all the free space left on a volume group:
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia
Extend an existing logical volume to use the maximum space:
# lvextend -l +100%FREE VolGroup00/lvolhome

Extend a logical volume and grow its filesystem in one step (-r):
# lvextend -l +100%FREE /dev/vg0/backup -r
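
A couple of follow-up commands that usually go with the ones above: a swap LV still needs to be formatted and activated, and if you extend without -r you have to grow the filesystem yourself (resize2fs assumes ext3/ext4; XFS would use xfs_growfs instead):

# mkswap /dev/VolGroup00/lvolswap
# swapon /dev/VolGroup00/lvolswap
# resize2fs /dev/VolGroup00/lvolhome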

Wednesday, August 19, 2015

Install RPM packages to a crashed server with software RAID in rescue mode

(1) Boot into rescue mode. Since the server has package dependency problems, it will not allow you to chroot.

(2) Configure network in rescue mode.

++++++++++++++++++++++++++++++++++
#ifconfig eth0 <IP-Addr> netmask <net_mask>
#route add default gw 192.168.1.254 eth0
++++++++++++++++++++++++++++++++++
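
If you need to download packages from the rescue environment, it also helps to point the resolver at a reachable DNS server (any working resolver will do):

#echo "nameserver 8.8.8.8" > /etc/resolv.conf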

(3) Examine the RAID arrays using mdadm and create a temporary RAID config file as follows.

#mdadm --examine --scan > /etc/mdadm.conf

(4) Assemble the RAID array that holds the broken OS installation. Here it is md4.

#mdadm --assemble --scan /dev/md4

Verify the status in /proc/mdstat

(5) Download the missing RPMs.

(6) Mount the assembled RAID array to a temporary mount point.

# mount /dev/md4 /old_drive

(7) Now, install the RPMs as follows.

#rpm -ivh --force --noscripts --root=/old_drive *.rpm

(8) Now you can unmount the crashed array and reboot the server.

Friday, July 10, 2015

Perl/Calfbot (spamming script) infection

Perl/Calfbot

The presence of a /tmp/... file reveals if a server is infected, and the file creation timestamp will accurately reflect the infection time. However, if the server is rebooted or the C&C server sends a KILL command, the file will still be present but the malware will no longer be running. In order to confirm an active infection, one must test the presence of a lock on /tmp/... using the following command:
flock --nb /tmp/... echo "System clean" || echo "System infected"
If one is infected, lsof can be used to see what process owns that lock:
lsof /tmp/...
The following can also validate that the targets of the /proc/$pid/exe symbolic links are the real crond:
pgrep -x "crond" | xargs -I '{}' ls -la "/proc/{}/exe"
Anything looking like "/tmp/ " (with a space) in the output is very suspicious.
pgrep requires the procps package. If you can’t install the package, replace:
pgrep -x crond
with
ps -ef | grep crond | grep -v grep | awk '{print $2}'
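
A small wrapper sketch that strings the checks above together (it uses the same /tmp/... file and crond process names described earlier; adjust if your layout differs):

if [ -e /tmp/... ]; then
    flock --nb /tmp/... echo "lock free - file present but malware not running" || echo "lock held - active infection"
    lsof /tmp/...
fi
ps -ef | grep crond | grep -v grep | awk '{print $2}' | xargs -I '{}' ls -la "/proc/{}/exe"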