Saturday, October 22, 2016

Oracle: RMAN Backup & Recovery

1. Check whether the database is in ARCHIVELOG mode:
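Either of the following shows the current mode:

     SQL> archive log list
     SQL> select log_mode from v$database;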


Archive mode is disabled.


(2) Put the database into ARCHIVELOG mode:

The prerequisite for switching to ARCHIVELOG mode is that the database is in the MOUNT state, so restart the database into MOUNT:
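In SQL*Plus this is simply:

     SQL> shutdown immediate
     SQL> startup mount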





While in the MOUNT state, use an ALTER DATABASE statement to enable ARCHIVELOG mode:
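For example:

     SQL> alter database archivelog;
     SQL> alter database open;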




2. Define the flash recovery area:

The flash recovery area is set up and managed mainly through the following three initialization parameters:

db_recovery_file_dest : specifies the location of the flash recovery area
db_recovery_file_dest_size : specifies the size of the flash recovery area
db_flashback_retention_target : specifies how far back in time, in minutes, the database can be flashed back. The default is 1440 minutes, i.e. one day. The actual flashback window also depends on the size of the flash recovery area, because the area stores the flashback logs required for the rewind, so this parameter should be adjusted together with db_recovery_file_dest_size.

(1) View the current location and size of the flash recovery area:
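For example:

     SQL> show parameter db_recovery_file_dest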



(2) Set the flash recovery area location and size:
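For example (the directory shown is illustrative; set the size before the location):

     SQL> alter system set db_recovery_file_dest_size=4G scope=both;
     SQL> alter system set db_recovery_file_dest='/u01/flash_recovery_area' scope=both;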



Before setting the flash recovery area location, make sure the target directory has already been created; otherwise the modification will fail with an error:



(3) To view the results of the above settings:



(4) To view the flash recovery area space usage, query the following views:
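For example:

     SQL> select * from v$recovery_file_dest;
     SQL> select * from v$flash_recovery_area_usage;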



(5) To free up more space, you can delete all archived log files with the following RMAN command:
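For example:

     RMAN> delete archivelog all;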



3. Configure multiple archive log destinations:
(1) View the current log_archive_dest configuration:
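For example:

     SQL> show parameter log_archive_dest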



(2) Define a new location:
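For example (the custom directory is illustrative):

     SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
     SQL> alter system set log_archive_dest_2='LOCATION=/u01/arch';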



The first command uses the flash recovery area; the second uses a custom location.
(3) Verify that the definition succeeded:




4. The control_file_record_keep_time parameter:
This parameter specifies the minimum number of days that RMAN records are kept in the control file before they can be overwritten. The default value is 7 days. When a recovery catalog is used, a smaller value can be chosen.
To see the value of this parameter:
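For example:

     SQL> show parameter control_file_record_keep_time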




5. Using RMAN:
(1) Start RMAN:
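Typical connection commands (passwords and net service names here are illustrative):

     $ rman target /
     $ rman target sys/oracle@db02 nocatalog
     $ rman target sys/oracle@db02 catalog rman/rman@db01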




In the original examples, connections 1, 2, 3, 5 and 6 were nocatalog connections, while connection 4 used a recovery catalog; db02 is the target database and db01 is the catalog database.
(2) Display the RMAN configuration:
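For example:

     RMAN> show all;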


(3) Configure RMAN:
Enable automatic backup of the control file:
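For example:

     RMAN> configure controlfile autobackup on;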



Enable backup set compression:
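For example:

     RMAN> configure device type disk backup type to compressed backupset;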



Define a retention policy:
The default retention policy is REDUNDANCY 1.
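For example, to keep two copies of every backup, or alternatively a seven-day recovery window:

     RMAN> configure retention policy to redundancy 2;
     RMAN> configure retention policy to recovery window of 7 days;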



(4) Restore RMAN configuration settings to their defaults:
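Any setting can be reset with CLEAR, for example:

     RMAN> configure retention policy clear;
     RMAN> configure controlfile autobackup clear;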



6. The backup command:
Running backup database directly performs a whole-database backup; a backup taken this way cannot serve as the base for incremental or differential backups.
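For example:

     RMAN> backup database;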



Perform a level 0 backup, which can serve as the base for incremental and differential backups:
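For example:

     RMAN> backup incremental level 0 database;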



Perform a level 1 (differential) incremental backup:
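For example:

     RMAN> backup incremental level 1 database;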



Perform a level 1 cumulative backup:
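For example:

     RMAN> backup incremental level 1 cumulative database;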

           

Delete all archive logs after the backup is complete:
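For example, backing up the database and the archive logs together, deleting each log once it has been backed up:

     RMAN> backup database plus archivelog delete input;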



7. The list command:
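Common examples:

     RMAN> list backup;
     RMAN> list backup of database;
     RMAN> list archivelog all;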



                               
8. The report command:
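Common examples:

     RMAN> report schema;
     RMAN> report obsolete;
     RMAN> report need backup;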



9. The delete command:
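Common examples:

     RMAN> crosscheck backup;
     RMAN> delete expired backup;
     RMAN> delete obsolete;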


                               


10. Views related to RMAN:


V$ARCHIVED_LOG
V$BACKUP_CORRUPTION
V$BACKUP_DEVICE
V$BACKUP_FILES
V$BACKUP_PIECE
V$BACKUP_REDOLOG
V$BACKUP_SET
V$BACKUP_SPFILE
V$COPY_CORRUPTION
V$RMAN_CONFIGURATION





Friday, October 21, 2016

How to open without RESETLOGS after restoring an offline/cold backup


RESETLOGS | NORESETLOGS 

This clause determines whether Oracle Database resets the current log sequence number to 1, archives any unarchived logs (including the current log), and discards any redo information that was not applied during recovery, ensuring that it will never be applied. Oracle Database uses NORESETLOGS automatically except in the following specific situations, which require a setting for this clause: 

You must specify RESETLOGS: 

1.After performing incomplete media recovery or media recovery using a backup controlfile 
2.After a previous OPEN RESETLOGS operation that did not complete 
3.After a FLASHBACK DATABASE operation 

If a created controlfile is mounted, then you must specify RESETLOGS if the online logs are lost, or NORESETLOGS if they are not. So the point is that if we restored a controlfile backup, we have to do OPEN RESETLOGS, because the controlfile type will be BACKUP controlfile. Moreover, backups of the online redo logs should also be available, as RMAN does not back up online redo logs.

In order to avoid RESETLOGS, the following steps can be followed: 

  • In addition to taking a cold backup of the database, also take a backup of the online redo logs
  • Restore controlfile and database cold backup :
     RMAN> restore controlfile...... 
     RMAN> mount database ; 
     RMAN> restore database ; 
  • Copy the online redo logs to the desired location for new database.
  • Log in to SQL*Plus and generate the controlfile trace script (note that the database is already mounted from RMAN after restoring the controlfile) :
     SQL> alter database backup controlfile to trace as '/tmp/ctl.sql' noresetlogs ; 
     SQL> SHUTDOWN IMMEDIATE 
  • Edit the controlfile script if required, for example to change the location of the copied online redo logs.
  • STARTUP NOMOUNT the database and run the create controlfile script :
     SQL> STARTUP NOMOUNT 
     SQL> @/tmp/ctl.sql 
  • Recover the database and open it normally :
     SQL> RECOVER DATABASE ; 
     SQL> ALTER DATABASE OPEN ;

Building Oracle HA With PowerHA 6.1 On AIX 6.1

Configure IBM PowerHA on AIX.



Interface en0 is used only for the boot IP, and en1 only for the standby IP.
I. Requirements
1. Append the following lines to /etc/hosts on all nodes.
  1. #For Boot IP 
  2. 172.16.255.11   xxserv1 
  3. 172.16.255.13   xxserv2 
  4. #For Standby IP 
  5. 192.168.0.11    xxserv1-stby 
  6. 192.168.0.13    xxserv2-stby 
  7. #For Service IP 
  8. 172.16.255.15   xxserv1-serv 
  9. 172.16.255.17   xxserv2-serv 
  10. #For Persistent IP 
  11. 192.168.2.11    xxserv1-pers 
  12. 192.168.2.13    xxserv2-pers 
2. Ensure that the following AIX filesets are installed:
  1. [root@xxserv1 /]#lslpp -l bos.data bos.adt.lib bos.adt.libm bos.adt.syscalls bos.net.tcp.client bos.net.tcp.server bos.rte.SRC bos.rte.libc bos.rte.libcfg bos.rte.libpthreads bos.rte.odm bos.rte.lvm bos.clvm.enh bos.adt.base bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte  
3. Install PowerHA on all nodes:
  1. [root@xxserv1 /]#loopmount -i powerHA_v6.1.iso -o "-V cdrfs -o ro" -m /mnt 
  2. [root@xxserv1 /]#installp -a -d /mnt all 
After installation, update PowerHA to the latest fix level and reboot all nodes.
4. Append the boot IPs and standby IPs to /usr/es/sbin/cluster/etc/rhosts:
  1. [root@xxserv1 etc]#cat rhosts  
  2. 172.16.255.11    
  3. 172.16.255.13   
  4. 192.168.0.11     
  5. 192.168.0.13 
  6. [root@xxserv2 etc]#cat rhosts  
  7. 172.16.255.11    
  8. 172.16.255.13   
  9. 192.168.0.11     
  10. 192.168.0.13 
5. Edit the /usr/es/sbin/cluster/netmon.cf file. Append each node's boot IP and standby IP to it on that node.
  1. [root@xxserv1 cluster]#cat netmon.cf  
  2. 172.16.255.11 
  3. 192.168.0.11  
  4. [root@xxserv2 cluster]#cat netmon.cf  
  5. 172.16.255.13 
  6. 192.168.0.13  
6.Create a disk heartbeat:
  1. //Create heartvg on xxserv1 
  2. [root@xxserv1 /]#mkvg -x -y heartvg -C hdisk5 
  3. [root@xxserv1 /]#lspv|grep hdisk5  
  4. hdisk5          000c1acf7ca3bc3b                    heartvg    
  5. //import heartvg on xxserv2 
  6. [root@xxserv2 /]#importvg -y heartvg hdisk5 
  7. [root@xxserv2 /]#lspv|grep hdisk5                 
  8. hdisk5          000c1acf7ca3bc3b                    heartvg 
Test the disk heartbeat:
  1. //Running following command on xxserv1 
  2. [root@xxserv1 /]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r  
  3. DHB CLASSIC MODE  
  4. First node byte offset: 61440  
  5. Second node byte offset: 62976  
  6. Handshaking byte offset: 65024  
  7.        Test byte offset: 64512 
  8.  
  9. Receive Mode:  
  10. Waiting for response . . .  
  11. Magic number = 0x87654321  
  12. Magic number = 0x87654321  
  13. Magic number = 0x87654321  
  14. Link operating normally 
  15.  
  16. //Running following command on xxserv2 
  17. [root@xxserv2 /]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -t  
  18. DHB CLASSIC MODE  
  19. First node byte offset: 61440  
  20. Second node byte offset: 62976  
  21. Handshaking byte offset: 65024  
  22.        Test byte offset: 64512 
  23.  
  24. Transmit Mode:  
  25. Magic number = 0x87654321  
  26. Detected remote utility in receive mode.  Waiting for response . . .  
  27. Magic number = 0x87654321  
  28. Magic number = 0x87654321  
  29. Link operating normally 
7. Create a Shared Volume Group:
  1. //On xxserv1 
  2. [root@xxserv1 /]#mkvg -V 48 -y oradata hdisk6 hdisk7  
  3. 0516-1254 mkvg: Changing the PVID in the ODM.  
  4. 0516-1254 mkvg: Changing the PVID in the ODM.  
  5. oradata 
  6. [root@xxserv1 /]#mklv -y lv02 -t jfs2 oradata 20G  
  7. lv02  
  8. [root@xxserv1 /]#crfs -v jfs2 -d /dev/lv02 -m /oradata  
  9. File system created successfully.  
  10. 20970676 kilobytes total disk space.  
  11. New File System size is 41943040  
  12. [root@xxserv1 /]#chvg -an oradata 
  13. [root@xxserv1 /]#varyoffvg oradata 
  14. [root@xxserv1 /]#exportvg oradata 
  15.  
  16. //On xxserv2 import oradata volume group 
  17. [root@xxserv2 /]#importvg -V 48 -y oradata hdisk6  
  18. oradata  
  19. [root@xxserv2 /]#lspv  
  20. hdisk0          000c18cf00094faa                    rootvg          active               
  21. hdisk1          000c18cf003ca02c                    None                                 
  22. hdisk2          000c1acf3e6440c6                    None                                 
  23. hdisk3          000c1acf3e645312                    None                                 
  24. hdisk4          000c1acf3e6460d9                    None                                 
  25. hdisk5          000c1acf7ca3bc3b                    heartvg                              
  26. hdisk6          000c1acf7cb764d9                    oradata         active               
  27. hdisk7          000c1acf7cb765aa                    oradata         active 
8. For Oracle, do the following steps.
(1). Check that the following filesets are installed:
  1. [root@xxserv2 /]#lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools xlC.aix61.rte 
(2). Change the following network tunables:
  1. [root@xxserv1 /]#no -p -o tcp_ephemeral_low=9000 
  2. [root@xxserv1 /]#no -p -o tcp_ephemeral_high=65500 
  3. [root@xxserv1 /]#no -p -o udp_ephemeral_low=9000 
  4. [root@xxserv1 /]#no -p -o udp_ephemeral_high=65500 
  5. [root@xxserv2 /]#no -p -o tcp_ephemeral_low=9000 
  6. [root@xxserv2 /]#no -p -o tcp_ephemeral_high=65500 
  7. [root@xxserv2 /]#no -p -o udp_ephemeral_low=9000 
  8. [root@xxserv2 /]#no -p -o udp_ephemeral_high=65500 
(3).Create oracle user and groups:
  1. //On xxserv1 
  2. [root@xxserv1 /]#for id in oinstall dba oper;do mkgroup $id;done 
  3. [root@xxserv1 /]#mkuser oracle;passwd oracle
  4. [root@xxserv1 /]#chuser pgrp=oinstall oracle 
  5. [root@xxserv1 /]#chuser groups=oinstall,dba,oper oracle 
  6. [root@xxserv1 /]#chuser fsize=-1 oracle 
  7. [root@xxserv1 /]#chuser data=-1 oracle 
  8. //On xxserv2 
  9. [root@xxserv2 /]#for id in oinstall dba oper;do mkgroup $id;done 
  10. [root@xxserv2 /]#mkuser oracle;passwd oracle
  11. [root@xxserv2 /]#chuser pgrp=oinstall oracle 
  12. [root@xxserv2 /]#chuser groups=oinstall,dba,oper oracle 
  13. [root@xxserv2 /]#chuser fsize=-1 oracle 
  14. [root@xxserv2 /]#chuser data=-1 oracle 
(4). Change the maxuproc parameter:
  1. [root@xxserv1 /]#chdev -l sys0 -a maxuproc=16384 
  2. sys0 changed 
  3. [root@xxserv2 /]#chdev -l sys0 -a maxuproc=16384 
  4. sys0 changed 
(5).Create Oracle home:
  1. [root@xxserv1 /]#mkdir /u01;chown oracle:oinstall /u01;su - oracle 
  2. [root@xxserv1 /]$vi .profile 
  3. export ORACLE_SID=example 
  4. export ORACLE_BASE=/u01/app/oracle 
  5. export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1 
  6. export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS" 
  7. export TNS_ADMIN=$ORACLE_HOME/network/admin 
  8. export ORA_NLS11=$ORACLE_HOME/nls/data 
  9. export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin 
  10. export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib 
  11. export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib 
  12. export THREADS_FLAG=native 
  13. [root@xxserv1 /]$ . .profile;mkdir -p $ORACLE_HOME 
  14.  
  15. [root@xxserv2 /]#mkdir /u01;chown oracle:oinstall /u01;su - oracle 
  16. [root@xxserv2 /]$vi .profile 
  17. export ORACLE_SID=example 
  18. export ORACLE_BASE=/u01/app/oracle 
  19. export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1 
  20. export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS" 
  21. export TNS_ADMIN=$ORACLE_HOME/network/admin 
  22. export ORA_NLS11=$ORACLE_HOME/nls/data 
  23. export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin 
  24. export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib 
  25. export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib 
  26. export THREADS_FLAG=native 
  27. [root@xxserv2 /]$ . .profile;mkdir -p $ORACLE_HOME 
(6).Ensure that the /tmp filesystem has enough space:
  1. [root@xxserv1 /]#chfs -a size=+1G /tmp 
II. Create a cluster:
1.Add a cluster:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddclstr -n hatest  
  2. Current cluster configuration: 
  3.  
  4. Cluster Name: hatest  
  5. Cluster Connection Authentication Mode: Standard  
  6. Cluster Message Authentication Mode: None  
  7. Cluster Message Encryption: None  
  8. Use Persistent Labels for Communication: No  
  9. There are 0 node(s) and 0 network(s) defined 
  10.  
  11. No resource groups defined 
2.Add nodes to cluster:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/clnodename -a xxserv1  -p xxserv1  
  2. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/clnodename -a xxserv2  -p xxserv2 
  3. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/clnodename 
  4. xxserv1 
  5. xxserv2 
3. Configure HACMP diskhb network:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -l no -n net_diskhb_01 -i diskhb 
  2. [root@xxserv1 /]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets  
  3. Network Name   Node and Disk List  
  4. ============   ==================      ==================  
  5. net_diskhb_01           
4.Configure HACMP Communication devices:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_xxserv1:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n xxserv1 
  2. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_xxserv2:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n xxserv2 
  3. [root@xxserv1 /]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets  
  4. Network Name   Node and Disk List  
  5. ============   ==================      ==================  
  6. net_diskhb_01  xxserv1:/dev/hdisk5     xxserv2:/dev/hdisk5 
Test the diskhb network:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/sbin/cl_tst_2ndhbnet -cspoc -n'xxserv1,xxserv2' '/dev/hdisk5' 'xxserv1' '/dev/hdisk5' 'xxserv2'  
  2. cl_tst_2ndhbnet: Starting the receive side of the test for disk /dev/hdisk5 on node xxserv1  
  3. cl_tst_2ndhbnet: Starting the transmit side of the test for disk /dev/hdisk5 on node xxserv2  
  4. xxserv1: DHB CLASSIC MODE  
  5. xxserv1:  First node byte offset: 61440  
  6. xxserv1: Second node byte offset: 62976  
  7. xxserv1: Handshaking byte offset: 65024  
  8. xxserv1:        Test byte offset: 64512  
  9. xxserv1:  
  10. xxserv1: Receive Mode:  
  11. xxserv1: Waiting for response . . .  
  12. xxserv1: Magic number = 0x87654321  
  13. xxserv1: Magic number = 0x87654321  
  14. xxserv1: Magic number = 0x87654321  
  15. xxserv1: Link operating normally  
  16. xxserv2: DHB CLASSIC MODE  
  17. xxserv2:  First node byte offset: 61440  
  18. xxserv2: Second node byte offset: 62976  
  19. xxserv2: Handshaking byte offset: 65024  
  20. xxserv2:        Test byte offset: 64512  
  21. xxserv2:  
  22. xxserv2: Transmit Mode:  
  23. xxserv2: Magic number = 0x87654321  
  24. xxserv2: Detected remote utility in receive mode.  Waiting for response . . .  
  25. xxserv2: Magic number = 0x87654321  
  26. xxserv2: Magic number = 0x87654321  
  27. xxserv2: Link operating normally  
  28. cl_tst_2ndhbnet: Test complete 
5.Configure HACMP IP-Based Network:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_02 -i ether -s 255.255.255.0 -l yes 
  2. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_03 -i ether -s 255.255.255.0 -l yes 
6.Add Communication Interfaces:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'xxserv1' :'ether' :'net_ether_02' : : : -n'xxserv1'
  2. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'xxserv2' :'ether' :'net_ether_02' : : : -n'xxserv2' 
  3. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'xxserv1-stby' :'ether' :'net_ether_03' : : : -n'xxserv1'
  4. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'xxserv2-stby' :'ether' :'net_ether_03' : : : -n'xxserv2' 
7.Add service ip:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'xxserv1-serv' -w'net_ether_01' 
  2. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'xxserv2-serv' -w'net_ether_01' 
8.Add Resource Group:
Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Add a Resource Group



9.Add persistent ip:
Extended Configuration->Extended Topology Configuration->Configure HACMP Persistent Node IP Label/Addresses->Add a Persistent Node IP Label/Address




10.Verification and Synchronization:
Extended Configuration->Extended Verification and Synchronization, or use the following command:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal 
III. Install the Oracle database:
1. Run the rootpre.sh script:
Before installing, you must run rootpre.sh from the Oracle media:
  1. [root@xxserv1 database]#./rootpre.sh  
  2. ./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:43 
  3.  
  4. Checking if group services should be configured....  
  5. Group "hagsuser" does not exist.  
  6. Creating required group for group services: hagsuser  
  7. Please add your Oracle userid to the group: hagsuser  
  8. Configuring HACMP group services socket for possible use by Oracle. 
  9.  
  10. The group or permissions of the group services socket have changed. 
  11.  
  12. Please stop and restart HACMP before trying to use Oracle. 
  13.  
  14.  
  15. [root@xxserv2 database]#./rootpre.sh  
  16. ./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:11 
  17.  
  18. Checking if group services should be configured....  
  19. Group "hagsuser" does not exist.  
  20. Creating required group for group services: hagsuser  
  21. Please add your Oracle userid to the group: hagsuser  
  22. Configuring HACMP group services socket for possible use by Oracle. 
  23.  
  24. The group or permissions of the group services socket have changed. 
  25.  
  26. Please stop and restart HACMP before trying to use Oracle. 
After the step above, you can install the Oracle database and copy the installed Oracle files to the other node. Make sure that the Oracle listener address is your service IP.
2.Create start and stop scripts:
  1. [root@xxserv1 /]#vi /etc/dbstart  
  2. #!/usr/bin/bash  
  3. #Define Oracle Home  
  4. ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1  
  5. #Start Oracle Listener  
  6. if [ -x $ORACLE_HOME/bin/lsnrctl ]; then  
  7. su - oracle "-c lsnrctl start"  
  8. fi  
  9. #Start Oracle Instance  
  10. if [ -x $ORACLE_HOME/bin/sqlplus ]; then  
  11. su - oracle "-c sqlplus /nolog"<<EOF  
  12. connect / as sysdba  
  13. startup  
  14. quit  
  15. EOF  
  16. fi  
  17. [root@xxserv1 /]#vi /etc/dbstop  
  18. #!/usr/bin/bash  
  19. #Define Oracle Home  
  20. ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1  
  21. #Stop Oracle Listener  
  22. if [ -x $ORACLE_HOME/bin/lsnrctl ]; then  
  23. su - oracle "-c lsnrctl stop"   
  24. fi  
  25. #Stop Oracle Instance  
  26. if [ -x $ORACLE_HOME/bin/sqlplus ]; then  
  27. su - oracle "-c sqlplus /nolog"<<EOF  
  28. connect / as sysdba  
  29. shutdown immediate  
  30. quit  
  31. EOF  
  32. fi  
  33. [root@xxserv1 /]#chmod +x /etc/dbst* 
  34. [root@xxserv1 /]#scp /etc/dbst* xxserv2:/etc 
Register an Application to the Resource Group
1.Configure HACMP Application Servers:
Extended Configuration->Extended Resource Configuration->HACMP Extended Resources Configuration->Configure HACMP Application Servers->Add an Application Server



2.Create Application Monitor:
Extended Configuration->Extended Resource Configuration->Configure HACMP Application Servers->Configure HACMP Application Monitoring->Add a Process Application Monitor




3.Register A resource to resource group:
Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Change/Show Resources and Attributes for a Resource Group





5. Verification and Synchronization:
Execute the following command:

  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal 
After the steps above, start the HACMP service and test your configuration.
6.Display HACMP Configuration:
  1. [root@xxserv1 /]#/usr/es/sbin/cluster/utilities/cldisp 
  2. Cluster: hatest 
  3.    Cluster services: active 
  4.    State of cluster: up 
  5.       Substate: stable 
  6.  
  7. ############# 
  8. APPLICATIONS 
  9. ############# 
  10.    Cluster hatest provides the following applications: example  
  11.       Application: example 
  12.          example is started by /etc/dbstart 
  13.          example is stopped by /etc/dbstop 
  14.          Application monitor of example: example 
  15.             Monitor name: example 
  16.                Type: process 
  17.                Process monitored: tnslsnr 
  18.                Process owner: oracle 
  19.                Instance count: 1 
  20.                Stabilization interval: 60 seconds 
  21.                Retry count: 3 tries 
  22.                Restart interval: 198 seconds 
  23.                Failure action: fallover 
  24.                Cleanup method: /etc/lsnrClear.sh 
  25.                Restart method: /etc/lsnrRestart.sh 
  26.          This application is part of resource group 'oradb'. 
  27.             Resource group policies: 
  28.                Startup: on first available node 
  29.                Fallover: to next priority node in the list 
  30.                Fallback: never 
  31.             State of example: online 
  32.             Nodes configured to provide example: xxserv1 {up}  xxserv2 {up}   
  33.                Node currently providing example: xxserv1 {up}  
  34.                The node that will provide example if xxserv1 fails is: xxserv2 
  35.             Resources associated with example: 
  36.                Service Labels 
  37.                   xxserv1-serv(172.16.255.15) {online} 
  38.                      Interfaces configured to provide xxserv1-serv: 
  39.                         xxserv1 {up} 
  40.                            with IP address: 172.16.255.11 
  41.                            on interface: en0 
  42.                            on node: xxserv1 {up} 
  43.                            on network: net_ether_02 {up} 
  44.                         xxserv2 {up} 
  45.                            with IP address: 172.16.255.13 
  46.                            on interface: en0 
  47.                            on node: xxserv2 {up} 
  48.                            on network: net_ether_02 {up} 
  49.                   xxserv2-serv(172.16.255.17) {online} 
  50.                      Interfaces configured to provide xxserv2-serv: 
  51.                         xxserv1 {up} 
  52.                            with IP address: 172.16.255.11 
  53.                            on interface: en0 
  54.                            on node: xxserv1 {up} 
  55.                            on network: net_ether_02 {up} 
  56.                         xxserv2 {up} 
  57.                            with IP address: 172.16.255.13 
  58.                            on interface: en0 
  59.                            on node: xxserv2 {up} 
  60.                            on network: net_ether_02 {up} 
  61.                Shared Volume Groups: 
  62.                   oradata 
  63.  
  64. ############# 
  65. TOPOLOGY 
  66. ############# 
  67.    hatest consists of the following nodes: xxserv1 xxserv2  
  68.       xxserv1 
  69.          Network interfaces: 
  70.             diskhb_01 {up} 
  71.                device: /dev/hdisk5 
  72.                on network: net_diskhb_01 {up} 
  73.             xxserv1 {up} 
  74.                with IP address: 172.16.255.11 
  75.                on interface: en0 
  76.                on network: net_ether_02 {up} 
  77.             xxserv1-stby {up} 
  78.                with IP address: 192.168.0.11 
  79.                on interface: en1 
  80.                on network: net_ether_03 {up} 
  81.       xxserv2 
  82.          Network interfaces: 
  83.             diskhb_02 {up} 
  84.                device: /dev/hdisk5 
  85.                on network: net_diskhb_01 {up} 
  86.             xxserv2 {up} 
  87.                with IP address: 172.16.255.13 
  88.                on interface: en0 
  89.                on network: net_ether_02 {up} 
  90.             xxserv2-stby {up} 
  91.                with IP address: 192.168.0.13 
  92.                on interface: en1 
  93.                on network: net_ether_03 {up} 

Before you start the PowerHA service, you must execute the following steps on both nodes; then you can run the clstat command.
  1. [root@xxserv1 utilities]# snmpv3_ssw -1  
  2. Stop daemon: snmpmibd 
  3. In /etc/rc.tcpip file, comment out the line that contains: snmpmibd 
  4. In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2 
  5. Stop daemon: snmpd 
  6. Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1 
  7. Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne 
  8. Start daemon: dpid2 
  9. Start daemon: snmpd 
  10.  
  11. [root@xxserv2 /]# snmpv3_ssw -1  
  12. Stop daemon: snmpmibd 
  13. In /etc/rc.tcpip file, comment out the line that contains: snmpmibd 
  14. In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2 
  15. Stop daemon: snmpd 
  16. Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1 
  17. Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne 
  18. Start daemon: dpid2 
  19. Start daemon: snmpd