[root@localhost ]# dmesg --> print the kernel ring buffer (boot and driver messages)
[root@localhost ]# cat /var/log/messages --> view the system log
[root@localhost ]# df -h --> list filesystem usage in human-readable form
[root@localhost ]# top --> real-time system statistics
[root@localhost ]# uname -a --> show kernel and system details
[root@localhost ]# ifconfig -a --> show details of all network interfaces
[root@localhost ]# init 0 --> shut down now
[root@localhost ]# init 6 --> restart now
Friday, 27 May 2011
Distributed Replicated Block Device (DRBD)
What you need before starting the DRBD installation:
(1) Make sure each server can resolve both node names (/etc/hosts)
(2) Minimum OS for DRBD is RHEL 5/CentOS 5
(3) Files needed: kmod-drbd82-8.2.6-2.i686.rpm and drbd82-8.2.6-1.el5.centos.i386.rpm
(4) Firewall: open port 7788
Current situation:
• node1.yourdomain.org 172.29.156.20/24, source disk /dev/sdc that will be replicated
• node2.yourdomain.org 172.29.156.21/24, target disk /dev/sdc
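Prerequisite (1) can be satisfied with /etc/hosts entries like these on both nodes (addresses taken from the setup above; the short aliases are optional):

```
172.29.156.20   node1.yourdomain.org   node1
172.29.156.21   node2.yourdomain.org   node2
```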
1. Install the kmod package first, followed by drbd.
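Spelled out, step 1 is just two rpm installs, the kernel module first (file names from the prerequisites above; run on both nodes):

```
[root@node1 ~]# rpm -ivh kmod-drbd82-8.2.6-2.i686.rpm
[root@node1 ~]# rpm -ivh drbd82-8.2.6-1.el5.centos.i386.rpm
```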
2. Check the configuration at /etc/drbd.conf; it should look like this:
#
# please have a look at the example configuration file in
# /usr/share/doc/drbd/drbd.conf
#
global { usage-count no; }
resource repdata {
  protocol C;
  syncer { rate 10M; }
  on node1.yourdomain.org {
    device /dev/drbd0;
    disk /dev/sdc;
    address 172.29.156.20:7788;
    meta-disk internal;
  }
  on node2.yourdomain.org {
    device /dev/drbd0;
    disk /dev/sdc;
    address 172.29.156.21:7788;
    meta-disk internal;
  }
}
Copy the same drbd.conf to the second node at /etc/drbd.conf.
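As a quick sanity check on the `rate 10M;` setting above: with an ~8 GB backing disk (8191 MB, the size /proc/drbd reports for /dev/sdc in this setup) capped at roughly 10 MB/s, the initial full sync should take about:

```shell
# Rough initial-sync estimate: disk size divided by the configured syncer rate.
SIZE_MB=8191   # size reported by /proc/drbd for /dev/sdc
RATE_MBS=10    # syncer { rate 10M; } => about 10 MB/s
echo "estimated full sync: $(( SIZE_MB / RATE_MBS )) seconds (~$(( SIZE_MB / RATE_MBS / 60 )) minutes)"
```

which agrees with the "finish: 0:12:05" countdown shown during the actual sync.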
3. Create the disk metadata:
[root@node1 etc]# drbdadm create-md repdata
v08 Magic number not found
v07 Magic number not found
About to create a new drbd meta data block on /dev/sdc.
. ==> This might destroy existing data! <==
Do you want to proceed? [need to type 'yes' to confirm] yes
Creating meta data...
initialising activity log
NOT initialized bitmap (256 KB)
New drbd meta data block successfully created.
4. Start DRBD on both nodes:
[root@node1 etc]# service drbd start
Starting DRBD resources: [ d0 n0 ]. ......
[root@node1 etc]# cat /proc/drbd
version: 8.0.4 (api:86/proto:86) SVN Revision: 2947 build by buildsvn@c5-i386-build, 2007-07-31 19:17:18
. 0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
. ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
. resync: used:0/31 hits:0 misses:0 starving:0 dirty:0 changed:0
. act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
5. Start a full sync of the DRBD disks:
[root@node1 etc]# drbdadm -- --overwrite-data-of-peer primary repdata
[root@node1 etc]# watch -n 1 cat /proc/drbd
version: 8.0.4 (api:86/proto:86) SVN Revision: 2947 build by buildsvn@c5-i386-build, 2007-07-31 19:17:18
. 0: cs:SyncTarget st:Primary/Secondary ds:Inconsistent/Inconsistent C r---
. ns:0 nr:68608 dw:68608 dr:0 al:0 bm:4 lo:0 pe:0 ua:0 ap:0
. [>...................] sync'ed: 0.9% (8124/8191)M finish: 0:12:05 speed: 11,432 (11,432) K/sec
. resync: used:0/31 hits:4283 misses:5 starving:0 dirty:0 changed:5
. act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
Watch the output until the sync completes.
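The fields worth watching (connection state and sync progress) can be pulled out with standard tools. A small sketch, run here against a saved copy of the output above so it is runnable anywhere; on a live node you would point the commands at /proc/drbd instead:

```shell
# Sample /proc/drbd output saved to a file (on a real node, use /proc/drbd).
STATUS=/tmp/drbd-status.txt
cat > "$STATUS" <<'EOF'
 0: cs:SyncTarget st:Primary/Secondary ds:Inconsistent/Inconsistent C r---
    [>...................] sync'ed:  0.9% (8124/8191)M finish: 0:12:05
EOF

# Connection state: Connected, SyncSource, SyncTarget, WFConnection, ...
awk 'match($0, /cs:[A-Za-z]+/) { print substr($0, RSTART + 3, RLENGTH - 3) }' "$STATUS"

# Sync progress line, present only while a resync is running.
grep -o "sync'ed:[^%]*%" "$STATUS"
```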
6. Now we can format the disk and mount the filesystem wherever we want:
[root@node1 ]# mkfs.ext3 /dev/drbd0 ; mkdir /repdata ; mount /dev/drbd0 /repdata
7. We can test by creating some dummy data:
[root@node1 etc]# for i in {1..5}; do dd if=/dev/zero of=/repdata/file$i bs=1M count=100; done
8. Check the replicated data:
[root@node1 /]# umount /repdata ; drbdadm secondary repdata
[root@node2 /]# mkdir /repdata ; drbdadm primary repdata ; mount /dev/drbd0 /repdata
[root@node2 /]# ls /repdata/
file1 file2 file3 file4 file5 lost+found
DRBD is now up and running.
Thursday, 26 May 2011
Linux HA (High Availability)
What is High Availability?
From Wikipedia: High availability is a system design approach and associated service implementation that ensures a prearranged level of operational performance will be met during a contractual measurement period.
From me: keeping server uptime and availability at 99.9999%, or in other words about 30 seconds of downtime per year! How do we achieve that availability? Yes, with HA.
HA uses a failover mechanism to ensure availability. Refer to the diagram below (credit to Alan Robertson, Linux-HA project):
The diagram shows a physical failover setup with a Disaster Recovery Center (DRC), using replication to mirror storage. Later I will show how to use DRBD (Distributed Replicated Block Device), which you can think of as RAID 1 over the network.
The first thing we need is the RPM installer. Some people prefer compiling from source, but I'm tired of chasing dependency issues at compile time, so we'll just use RPMs.
For RHEL you can use the CentOS RPMs, found here: CentOS i386 RPM repos.
Get:
1.heartbeat-2.1.3-3.el5.centos.i386.rpm
2.heartbeat-pils-2.1.3-3.el5.centos.i386.rpm
3.heartbeat-stonith-2.1.3-3.el5.centos.i386.rpm
Installation (both servers):
1. Put the three files in a folder on your RHEL box (any folder you are comfortable with).
2. Install heartbeat-pils, followed by heartbeat-stonith and heartbeat 2.1.3:
root@localhost# rpm -ivh heartbeat-pils-2.1.3-3.el5.centos.i386.rpm
root@localhost# rpm -ivh heartbeat-stonith-2.1.3-3.el5.centos.i386.rpm
root@localhost# rpm -ivh heartbeat-2.1.3-3.el5.centos.i386.rpm
Configuring HA:
There are 3 files needed for HA to run:
1.authkeys
2.ha.cf
3.haresources
These 3 files should be located in /etc/ha.d.
In authkeys:
auth 2
2 sha1 yourpassword
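The sha1 secret in authkeys can be any hard-to-guess shared string. A sketch of generating a random one (md5sum is used here only as a convenient random-hex generator, not as the HA hash; both nodes must get the same file):

```shell
# Generate a 32-character random hex secret for /etc/ha.d/authkeys.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')

# Emit the two authkeys lines; copy the same file to both nodes.
echo "auth 2"
echo "2 sha1 $KEY"
```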
In ha.cf:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node node01 (your primary server's hostname)
node node02 (your backup server's hostname)
In haresources (this defines what will be failed over):
hostname ipaddress httpd (depends on the service you need to fail over)
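For example, a concrete haresources line could look like the one below; the cluster IP 172.29.156.25 is a made-up illustration, not from this post. Heartbeat brings up that IP and starts httpd on whichever node is active:

```
node01 172.29.156.25 httpd
```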
Both servers must have identical configuration files.
Try to start heartbeat:
root@localhost#service heartbeat start
Stay tuned for DRBD!
Any enquiries, please leave a comment...
Handshake Failed. SSL0234W Error from WebSphere v5/6
SSL0234W: SSL Handshake Failed, The certificate sent by the peer has expired or is invalid.
The first symptom of this issue is that users can't connect over a secure connection (port 443, SSL).
Check the error log from Apache; by default (IBM HTTP Server) it is under:
root@localhost# cd /opt/IBMHTTPServer/logs/dailylogs
Let's consider the situation below:
OS=AIX 5.3
Apps=IBM WebSphere v5
HTTP=IBM HTTPServer 1.3
CA=DigiCert Malaysia
CA Root=Malaysia Premier CA 1024(MPCA1024)
On 1st May 2011 the CA root expired and had to be replaced by the new CA root, MPCA1024.
After the new CA root was installed, users could no longer access the system and the server returned error SSL0234W. There is a high possibility that the new CA root itself is faulty, even if the certificate displays correctly in a certificate viewer.
When this happens you have to ask the CA to check the CA root; as we know, issuing a new CA root involves a lot of work.
This exact issue happened on my system, and the cause was the CA root itself.
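One quick way to rule the certificate in or out is to inspect its validity window with openssl. A sketch: the throwaway self-signed certificate generated below exists only to make the commands runnable; in practice you would point openssl at the CA root file from your key database (or at the live server with `openssl s_client -connect host:443`):

```shell
# Create a throwaway self-signed cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 365 2>/dev/null

# Show who the cert is for and when it expires; an expired or not-yet-valid
# CA root is exactly what triggers SSL0234W.
openssl x509 -in /tmp/demo.crt -noout -subject -enddate

# Exit status 0 means the cert is valid right now (-checkend 0).
openssl x509 -in /tmp/demo.crt -noout -checkend 0 && echo "certificate still valid"
```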
Welcome
Finally I have time to write this blog and share knowledge about IT system engineering and maintenance. Hopefully this blog can bring us together to share what we know.