Saturday, October 5, 2013

CloudLinux: Tips




Converting CentOS 5 or CentOS 6 to CloudLinux
---------------------------------------------------
$ wget http://repo.cloudlinux.com/cloudlinux/sources/cln/cldeploy
$ sh cldeploy -k <activation_key> # if you have an activation key
or
$ sh cldeploy -i # if you have an IP-based license
$ reboot
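
After the reboot, you can verify that a CloudLinux kernel is running; converted systems show "lve" in the kernel version string (the version below is only illustrative):

$ uname -r
2.6.32-458.18.1.lve1.2.44.el6.x86_64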

CageFS Installation
-------------------------
$ yum install cagefs
$ /usr/sbin/cagefsctl --init

If you don't have enough disk space in /usr/share, use the following commands to place cagefs-skeleton in a different location (create the symlink before running --init):

$ mkdir /home/cagefs-skeleton
$ ln -s /home/cagefs-skeleton /usr/share/cagefs-skeleton


To enable CageFS for all users:

$ /usr/sbin/cagefsctl --enable-all
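
You can also enable a single user, or check which users are currently caged ('someuser' is a placeholder; --enable and --list-enabled are assumed to be present in your cagefsctl build):

$ /usr/sbin/cagefsctl --enable someuser
$ /usr/sbin/cagefsctl --list-enabled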


PHP Selector Installation
---------------------------------
Install the alternative PHP versions and their modules:
$ yum groupinstall alt-php

Update CageFS and LVE Manager with support for PHP Alternatives:
$ yum update cagefs lvemanager

cPanel/WHM: Make sure 'Select PHP version' is enabled in Feature Manager
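
The alt-php packages land under /opt/alt, so a quick way to confirm which versions were installed (directory names vary by release):

$ ls /opt/alt/
php44  php51  php52  php53  php54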


Add an RPM/command to CageFS
---------------------------------
/usr/sbin/cagefsctl --addrpm rsync
cagefsctl --force-update
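
To verify the command is now visible inside the cage, run it as a CageFS-enabled user ('someuser' is a placeholder):

$ su - someuser -c "which rsync"
/usr/bin/rsync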

List the available RPMs in CageFS
------------------------------------
/usr/sbin/cagefsctl --list-rpm


Set LVE limits to unlimited
-------------------------------------
CloudLinux 6
--------------------
You can set the default LVE limits to unlimited as follows:
#lvectl set default --cpu=100 --ncpu=100 --vmem=0 --pmem=0 --nproc=0 --maxEntryProcs=10000 --io=0
#lvectl apply all

CloudLinux 5
-------------------
The pmem and io limits are not available on CL5, so use:
#lvectl set default --cpu=100 --ncpu=100 --vmem=0 --maxEntryProcs=10000
#lvectl apply all
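
Limits can also be set per user, addressed by LVE id (normally the account's UID); the id and values below are just an example:

#lvectl set 1001 --cpu=25 --ncpu=1 --vmem=0 --maxEntryProcs=20
#lvectl list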

Mounting a Software RAID1 member: mdadm

(1) I connected my old hard drive and realized that it was a RAID member:



#fdisk -l /dev/sdd
Disk /dev/sdd: 250.1 GB, 250058268160 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488395055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x90909090

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *        2048     2099199     1048576   fd  Linux raid autodetect
/dev/sdd2         2099200     6293503     2097152   82  Linux swap / Solaris
/dev/sdd3         6293504    69208063    31457280   fd  Linux raid autodetect
/dev/sdd4        69208064   488394751   209593344   fd  Linux raid autodetect

You cannot mount it as a normal partition:


#mkdir /mnt/old_hdd 
#mount /dev/sdd4 /mnt/old_hdd 
mount: unknown filesystem type 'linux_raid_member'

(2) If it is a RAID1 array member, you can mount it using mdadm.


#mdadm --examine /dev/sdd4
/dev/sdd4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 957e7cb5:bfd41f70:9cb84b0d:f53e5a4c
           Name : milosz-desktop:2
  Creation Time : Sat Aug 20 18:48:26 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 419184640 (199.88 GiB 214.62 GB)
     Array Size : 419184496 (199.88 GiB 214.62 GB)
  Used Dev Size : 419184496 (199.88 GiB 214.62 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : db8a694f:750a0ded:22a6d046:5c4db280

    Update Time : Tue May  8 20:50:32 2012
       Checksum : 75dbc3b6 - correct
         Events : 191


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)


In order to mount it properly, you will have to create an md virtual device with mdadm.


#mdadm -A -R /dev/md9 /dev/sdd4
mdadm: /dev/md9 has been started with 1 drive (out of 2).

Now you can mount /dev/md9 without any problem.


#mount /dev/md9 /mnt/old_hdd/
#mount | grep ^/dev/md9
/dev/md9 on /mnt/old_hdd type ext4 (rw)
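
If the goal is only to rescue data, mounting read-only is safer:

#mount -o ro /dev/md9 /mnt/old_hdd/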

You can now copy the data to another drive. Once the data is transferred, unmount the filesystem and stop the array as follows:


#umount /mnt/old_hdd 
#mdadm -S /dev/md9
mdadm: stopped /dev/md9








Software RAID: Fail and replace a drive
================================

In this example, /dev/sda is the failing drive. Current array status:

[~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      2096120 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdb3[1] sda3[0]
      102388 blocks super 1.0 [2/2] [UU]
     
md3 : active raid1 sda4[0] sdb4[1]
      481990524 blocks super 1.1 [2/2] [UU]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md1 : active raid1 sda1[0] sdb1[1]
      4193272 blocks super 1.1 [2/2] [UU]
---------------------------------------------

(1) Fail and remove the partitions from the RAID arrays:

mdadm --fail /dev/md0 /dev/sda3     # fail sda3 in md0
mdadm --remove /dev/md0 /dev/sda3   # remove sda3 from md0

mdadm --fail /dev/md1 /dev/sda1
mdadm --remove /dev/md1 /dev/sda1

mdadm --fail /dev/md2 /dev/sda2
mdadm --remove /dev/md2 /dev/sda2

mdadm --fail /dev/md3 /dev/sda4
mdadm --remove /dev/md3 /dev/sda4
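
mdadm also accepts both actions in a single invocation, which is a little less error-prone:

mdadm /dev/md0 --fail /dev/sda3 --remove /dev/sda3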

---------------------------------------------------------
/proc/mdstat will then look similar to this:

[~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1]
      2096120 blocks super 1.1 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdb3[1]
      102388 blocks super 1.0 [2/1] [_U]
     
md3 : active raid1 sdb4[1]
      481990524 blocks super 1.1 [2/1] [_U]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md1 : active raid1 sdb1[1]
      4193272 blocks super 1.1 [2/1] [_U]
     
unused devices: <none>
---------------------------------------------------------


(2) Replicate the partition table from the surviving drive (sdb) to the new drive (sda); adjust the device names to match your system:

sfdisk -d /dev/sdb | sfdisk /dev/sda
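
Afterwards, check that the new drive's layout matches:

fdisk -l /dev/sda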



If the disk uses a GPT partition table, follow the steps below instead.

(i) Install sgdisk (provided by the gdisk package):

# yum install gdisk

(ii) Use sgdisk to clone the partition table from /dev/sdb to the new drive /dev/sda, then randomize the GUIDs so they don't clash with the original disk:
-----------------------------------------
#sgdisk --backup=table /dev/sdb
#sgdisk --load-backup=table /dev/sda
#sgdisk -G /dev/sda
-----------------------------------------

(3) Add the new partitions to the RAID arrays accordingly:

mdadm --add /dev/md0 /dev/sda3
mdadm --add /dev/md1 /dev/sda1
mdadm --add /dev/md2 /dev/sda2
mdadm --add /dev/md3 /dev/sda4
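
If the replaced drive was part of the boot mirror, reinstall the bootloader on it as well; on a CentOS 6-era system with GRUB legacy, that would be (a sketch, adjust to your setup):

grub-install /dev/sda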

(4) The RAID resync will start automatically. Check its status in /proc/mdstat.

To see the kernel speed limits imposed on RAID reconstruction:


cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
1000

To increase the speed:

echo 50000 >/proc/sys/dev/raid/speed_limit_min
echo 50000 >/proc/sys/dev/raid/speed_limit_max
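
To watch the resync progress live:

watch -n 5 cat /proc/mdstat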


MySQL Backup scripts

Back up all databases in .gz format to a local directory
----------------------------------------------------------------------

for db in `echo 'show databases;' | mysql | grep -v ^Database `; do mysqldump $db | gzip > /<backup_dir>/$db.sql.gz ; done
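
To restore one of these dumps later (the path and database name are placeholders):

mysqladmin create mydb    # create the database first if it does not exist yet
gunzip < /<backup_dir>/mydb.sql.gz | mysql mydb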

Back up all databases in .gz format to a remote location
-----------------------------------------------------------------------------------
for db in `echo 'show databases;' | mysql | grep -v ^Database ` ; do mysqldump --opt --single-transaction --quick $db | gzip -9 | ssh user@<Remote_IP_address> "cat > /home/<username>/<back_dir>/$db.sql.gz" ; done

NOTE: Make sure that SSH key authentication is enabled between the servers.
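
If it isn't yet, a minimal setup sketch (run on the source server; the user and IP are placeholders):

ssh-keygen -t rsa
ssh-copy-id user@<Remote_IP_address>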