Wednesday, May 20, 2015

Finding Cache Hit Ratios History

Many DBAs dismiss cache hit ratios as mythical, but I personally don't, as they have pointed me in the right direction many times. It is true, however, that drawing conclusions from cache hit ratios alone can lead you in the wrong direction: these values can only be interpreted in context. A high buffer cache hit ratio does not necessarily mean that your database is doing well, and a low buffer cache hit ratio does not necessarily mean that it is doing badly.
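For context, the buffer cache hit ratio itself is derived from a handful of V$SYSSTAT counters. A minimal sketch of the standard calculation, with made-up counter values for illustration:

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Standard formula: 1 - physical_reads / (db_block_gets + consistent_gets),
    expressed as a percentage."""
    logical_reads = db_block_gets + consistent_gets
    return 100.0 * (1 - physical_reads / logical_reads)

# Hypothetical counter values, just to show the shape of the calculation
print(round(buffer_cache_hit_ratio(physical_reads=1_000,
                                   db_block_gets=40_000,
                                   consistent_gets=60_000), 2))  # -> 99.0
```

This is exactly why the ratio needs context: a workload that rereads the same hot blocks inflates logical reads and the ratio along with them, without saying anything about response time.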

You can check the historical values and trends of your cache hit ratios using the following combination of queries in 10g and later databases.
Please note that the DBA_HIST% views require an Oracle Diagnostics Pack license.

Please note that this information comes from the AWR repository, so the total number of rows returned depends on your AWR snapshot interval and snapshot retention settings. For this example, I used a database with an AWR snapshot interval of 30 minutes and a retention of 30 days (though I did not paste all 30 days of output here).

Find the METRIC_ID of the metrics you want to check for historical trends
SQL> select metric_id,METRIC_NAME from DBA_HIST_METRIC_NAME where upper(metric_name) like '%CACHE%' order by 1;

 METRIC_ID METRIC_NAME
---------- ----------------------------------------------------------------
      2000 Buffer Cache Hit Ratio
      2000 Buffer Cache Hit Ratio
      2050 Cursor Cache Hit Ratio
      2098 Global Cache Average CR Get Time
      2099 Global Cache Average Current Get Time
      2101 Global Cache Blocks Corrupted
      2102 Global Cache Blocks Lost
      2110 Row Cache Hit Ratio
      2111 Row Cache Miss Ratio
      2112 Library Cache Hit Ratio
      2112 Library Cache Hit Ratio
      2113 Library Cache Miss Ratio
      2115 PGA Cache Hit %

Find the historical trend of the metric. For RAC, also include the INSTANCE_NUMBER column in the query
SQL> set lines 200
SQL> alter session set nls_date_format='DD-MON-YY HH24:MI:SS';
select BEGIN_TIME,END_TIME,INTSIZE,MINVAL,MAXVAL,AVERAGE from DBA_HIST_SYSMETRIC_SUMMARY where metric_id=2000 order by begin_time;
BEGIN_TIME         END_TIME              INTSIZE     MINVAL     MAXVAL    AVERAGE
------------------ ------------------ ---------- ---------- ---------- ----------
21-MAY-15 00:21:48 21-MAY-15 00:51:48     179998          0 99.9229614   98.89138
21-MAY-15 00:51:48 21-MAY-15 01:21:47     179910          0 99.4784251 96.9950951
21-MAY-15 01:21:47 21-MAY-15 01:51:47     180052          0 97.9948758 95.0564343
21-MAY-15 01:51:47 21-MAY-15 02:21:47     180007          0 99.8254444 95.9288337
21-MAY-15 02:21:47 21-MAY-15 02:51:47     179986          0 99.8268827 98.3058298
21-MAY-15 02:51:47 21-MAY-15 03:21:47     179989          0 99.8828385 98.3934493
21-MAY-15 03:21:47 21-MAY-15 03:51:47     179995          0 99.5284186 98.0050365
21-MAY-15 03:51:47 21-MAY-15 04:21:47     180004          0 99.6234565  97.367626
21-MAY-15 04:21:47 21-MAY-15 04:51:47     180016          0 99.3352565 96.9461415
21-MAY-15 04:51:47 21-MAY-15 05:21:47     180017          0 98.8389462 95.5678732
21-MAY-15 05:21:47 21-MAY-15 05:51:47     179980          0 99.7772791 98.0900195

BEGIN_TIME         END_TIME              INTSIZE     MINVAL     MAXVAL    AVERAGE
------------------ ------------------ ---------- ---------- ---------- ----------
21-MAY-15 05:51:47 21-MAY-15 06:21:47     179991          0 99.8044275 97.7869703
21-MAY-15 06:21:47 21-MAY-15 06:51:47     180000          0 99.6123821 97.5314016
21-MAY-15 06:51:47 21-MAY-15 07:21:47     179988          0 99.7801794 98.2470852
21-MAY-15 07:21:47 21-MAY-15 07:51:47     180004          0 99.7263433 97.7974523
21-MAY-15 07:51:47 21-MAY-15 08:21:47     179988          0 99.7177159 97.2940065
21-MAY-15 08:21:47 21-MAY-15 08:51:47     179997          0 99.9238153 98.4526479
21-MAY-15 08:51:47 21-MAY-15 09:21:47     180002          0 99.9203255 98.7130192
21-MAY-15 09:21:47 21-MAY-15 09:51:47     180004          0 99.7603138 95.6753558
BEGIN_TIME shows the begin time of the snapshot interval
END_TIME shows the end time of the snapshot interval
INTSIZE shows the interval duration in centiseconds (180000 centiseconds, or 30 minutes, in this case)
MINVAL shows the minimum value recorded for the metric during the interval
MAXVAL shows the maximum value recorded for the metric during the interval
AVERAGE shows the average value of the metric during the interval
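As a quick sanity check on the units, INTSIZE is reported in hundredths of a second, so a 30-minute interval shows up as roughly 180000. A small sketch of the conversion:

```python
def intsize_to_minutes(intsize_centiseconds):
    # INTSIZE is in hundredths of a second (centiseconds)
    return intsize_centiseconds / 100 / 60

print(intsize_to_minutes(180000))  # -> 30.0
```

The values in the output above (179998, 179910, ...) are all within a second or two of 180000, matching the 30-minute snapshot interval configured for this database.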

Monday, May 18, 2015

ASM Diskgroup Not Accessible by oracle user

If you are using role separation for Grid Infrastructure (RAC or standalone) and the database software, you need one OS user (normally named "grid") for Grid Infrastructure installation and management, and another OS user (normally named "oracle") for database software installation and management.

Problem Description
After installing Grid Infrastructure, ASM diskgroups are created while connected as SYSASM so that a database can be stored on them, but then you realize that the "oracle" user cannot access the diskgroup (you may receive an ORA-15001 error). Most of the time the reason for this problem is that the "oracle" user is not a member of the OS group "asmdba". If you are facing this problem, double-check whether the "oracle" user is a member of the asmdba OS group. While logged in as the oracle user, execute the "id" command to check this.

[oracle@XXXXXXX1 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper)

Solution
While logged in as “root”, make oracle user member of asmdba OS group.
[root@XXXXXXX ~]#  usermod  -g oinstall -G dba,oper,asmdba oracle

Again log in as “oracle” and execute “id” again.
[oracle@XXXXXXX1 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54325(asmdba)

Thursday, May 14, 2015

Universal Unique Identifier (UUID) error during 12c RAC Installation

During installation of 12.1.0.1 (12c) grid infrastructure (RAC or standalone), prerequisite check screen may show a warning for “Device Checks for ASM”, stating:
Cannot verify the shared state for device /dev/oracleasm/disk/<ASM Disk> due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes :[<nodes list>]
Cause: Cause of problem Not Available
Action: User Action Not Available.

When installing version 12.1.0.2, you will not see this warning if you are using ASMLib, but in 12.1.0.1 it shows up even when ASMLib is in use.

Solution
This warning can safely be ignored if you are using ASMLib to configure ASM, as UUIDs are not required in that case.
If you are planning to use raw or block devices, please see MOS note 580153.1 for how to configure storage for ASM.


Tuesday, May 12, 2015

Performing Block Media Recovery

While running SQL statements against the database, SQLs may suddenly start failing with error messages indicating database block corruption in some datafile(s). The error messages in the alert log could be similar to the following.
...
ORA-01578: ORACLE data block corrupted (file # 4, block # 36321)
ORA-01110: data file 4: 'G:\ORACLE\ORADATA\TESTDB\MYDATA01.DBF'
 ...

In case of physical corruption, the error message could be as follows, where the computed checksum of the block does not match the check value stored in the block header:
computed block checksum: 0x50
Reread of blocknum=86032, file= G:\ORACLE\ORADATA\TESTDB\MYDATA02.DBF. found same corrupt data
Hex dump of (file 5, block 86032) in trace file d:\oracle\admin\testdb\bdump\testdb_j006_7340.trc
Corrupt block relative dba: 0x01415010 (file 4, block 86032)
Bad header found during buffer read
Data in bad block:
 type: 1 format: 2 rdba: 0x000051c0
 last change scn: 0x81b4.00001add seq: 0x1a flg: 0x96
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x68b50601
 check value in block header: 0x4
 computed block checksum: 0x50
Reread of rdba: 0x01415010 (file 5, block 86032) found same corrupted data
Thu May 07 07:08:36 2015
Corrupt Block Found
         TSN = 4, TSNAME = MYDATA
         RFN = 5, BLK = 86032, RDBA = 21057552
         OBJN = 125890, OBJD = 125890, OBJECT = FINANCIAL_TABLE, SUBOBJECT =
         SEGMENT OWNER = SCOTT, SEGMENT TYPE = Table Segment
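The relative DBA shown in the trace (rdba 0x01415010, i.e. 21057552) packs the relative file number into the top 10 bits and the block number into the low 22 bits, which is how the log arrives at RFN = 5, BLK = 86032. A sketch of the decoding:

```python
def decode_rdba(rdba):
    """Split a relative data block address into (relative file#, block#).
    The relative file number occupies the top 10 bits of the 32-bit
    address; the block number occupies the low 22 bits."""
    file_no = rdba >> 22
    block_no = rdba & ((1 << 22) - 1)
    return file_no, block_no

print(decode_rdba(0x01415010))  # -> (5, 86032)
```

The same decoding applied to the bad block's stored rdba (0x000051c0) yields file 0, block 20928, i.e. a value that clearly does not belong to this block, which is consistent with the "Bad header found during buffer read" message.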

Starting with 11g R2, Oracle can perform block recovery automatically if Active Data Guard is configured in the environment. In this case, if the primary database encounters block corruption, it automatically tries to recover the block(s) by requesting them from the physical standby database. If block corruption is found in the physical standby database, it can request the correct block from the primary database to perform the recovery.

Oracle also provides a manual method to perform recovery at the block level, rather than restoring and recovering the whole datafile(s) where the corruption occurred. There are two types of block corruption, logical and physical. The method and command to perform recovery are the same in either case, but we need a valid RMAN backup to perform data block recovery.

To check for corruption, we use the BACKUP VALIDATE command from the RMAN prompt. If the CHECK LOGICAL option is not used, VALIDATE checks only for physical corruption; with CHECK LOGICAL, it checks for both physical and logical corruption. In the following example I check for both types of corruption after seeing "block checksum" error messages in the alert log of my database for datafile number 4.
RMAN> backup validate check logical database;
Starting backup at 23-APR-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=526 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00021 name=G:\ORACLE\ORADATA\TESTDB\SYSTEM01.DBF
input datafile fno=00072 name=G:\ORACLE\ORADATA\TESTDB\UNDOTBS01.DBF
input datafile fno=00074 name=G:\ORACLE\ORADATA\TESTDB\USERS.DBF
input datafile fno=00076 name=G:\ORACLE\ORADATA\TESTDB\SYSAUX01.DBF
input datafile fno=00078 name=G:\ORACLE\ORADATA\TESTDB\MYDATA01.DBF
input datafile fno=00080 name=G:\ORACLE\ORADATA\TESTDB\MYDATA02.DBF
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:47
Finished backup at 23-APR-15

RMAN>

The next step is to query the v$database_block_corruption view, which is populated by the VALIDATE command with the details of the corrupted blocks.
SQL> select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTION_TYPE
---------- ---------- ---------- ------------------ ---------------
         4      86032          1                  0 CHECKSUM
         1      12281          1                  0 FRACTURED
         1        665          1                  0 FRACTURED
         2       9202          1          303801173 CORRUPT
         2      24605          1                  0 FRACTURED
         2      46357          1                  0 FRACTURED
         2      50364          1          303801434 CORRUPT
         2      50380          1          303801434 CORRUPT
         2      50885          1                  0 FRACTURED


Now here is a tricky part: I have experienced that sometimes the VALIDATE command won't show the physical corruption. The DB Verify utility (dbv), a dedicated tool for checking physical corruption in datafiles, comes in handy in this situation.

C:\Documents and Settings\Administrator>dbv file=G:\ORACLE\ORADATA\TESTDB\MYDATA02.DBF

DBVERIFY: Release 11.2.0.3.0 - Production on Thu Apr 23 11:38:46 2015

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

DBVERIFY - Verification starting : FILE = G:\ORACLE\ORADATA\TESTDB\MYDATA02.DBF
Page 86032 is marked corrupt
Corrupt block relative dba: 0x01415010 (file 5, block 86032)
Bad header found during dbv:
Data in bad block:
 type: 1 format: 2 rdba: 0x000051c0
 last change scn: 0x81b4.00001add seq: 0x1a flg: 0x96
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x68b50601
 check value in block header: 0x4
 computed block checksum: 0x50



DBVERIFY - Verification complete

Total Pages Examined         : 262144
Total Pages Processed (Data) : 121366
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 4
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 435
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 140338
Total Pages Marked Corrupt   : 1
Total Pages Influx           : 0
Highest block SCN            : 1322968318 (0.1322968318)
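As a sketch of how to read these counters, the processed, empty, influx, and marked-corrupt totals above should add back up to Total Pages Examined, which they do here (the single corrupt page accounts for the difference):

```python
# Page counts taken from the dbv output above
examined = 262144
processed = {"data": 121366, "index": 4, "other": 435, "seg": 0}
empty = 140338
marked_corrupt = 1
influx = 0

accounted = sum(processed.values()) + empty + marked_corrupt + influx
print(accounted == examined)  # -> True
```

If the totals did not reconcile, that would usually point at pages dbv could not classify, which is itself worth investigating.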

Performing Recovery
The following example shows how to perform a data block recovery when the backup is on tape. If the backups are on disk, just allocate a disk channel; the rest of the method is the same.
If there is a long list of corrupted blocks, all of them can be recovered in a single go with the command "RECOVER CORRUPTION LIST" instead of the following "BLOCKRECOVER" command.
RMAN> run{
2> allocate channel ch0 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=C:\Progra~1\tivoli\tsm\agentoba64\tdpo.op
3> BLOCKRECOVER DATAFILE 4 block 86032;
4> }

released channel: ORA_DISK_1
allocated channel: ch0
channel ch0: sid=526 devtype=SBT_TAPE
channel ch0: Data Protection for Oracle: version 5.5.1.0

Starting blockrecover at 23-APR-15

channel ch0: restoring block(s)
channel ch0: specifying block(s) to restore from backup set
restoring blocks of datafile 00103
channel ch0: reading from backup piece BKP_TESTDB_824020284_58988_1
channel ch0: restored block(s) from backup piece 1
piece handle=BKP_TESTDB_824020284_58988_1 tag=TAG20130821T060022
channel ch0: block restore complete, elapsed time: 00:05:46

starting media recovery
media recovery complete, elapsed time: 00:01:25

Finished blockrecover at 23-APR-15
released channel: ch0


Thursday, May 7, 2015

Installing Oracle 12cR1 RAC on Linux 6

This article explains how to install a two-node Oracle 12c Release 1 Real Application Cluster (RAC) on Oracle Linux 6. I did this installation on Oracle VirtualBox by creating 2 virtual machines with shared storage. The OS platform I used is Oracle Enterprise Linux 6 (Update 3), and the Oracle GI and RDBMS version is 12.1.0.2. The same installation guide should also work for Red Hat Linux 6. The installation steps are no different if there are more than 2 nodes in the RAC setup.

The official documentation for 12c RAC installation can be found at the following link.
http://docs.oracle.com/database/121/CWLIN/toc.htm

Prerequisites
Prepare all the nodes by installing Oracle Linux 6. Have the private interconnects set up and the shared storage mounted on all the nodes. For this example, I have 3 disks (2G each) for the CRS diskgroup to store the OCR and voting disk, and 1 disk (4G) for the DATA diskgroup to store the database.
Please note that these disks should have external redundancy; otherwise you should specify at least 3 different locations for storing the voting disk, and 2 locations for the OCR.
Have 2 public IPs, 2 virtual IPs, 2 private IPs and 1 SCAN IP (from the public subnet), which we will use later during the installation. For this article, the node names are as follows.
Public
salman1.salman.com
salman2.salman.com

Virtual
salman1-vip.salman.com
salman2-vip.salman.com

Private
salman1-priv.salman.com
salman2-priv.salman.com

SCAN
salman-scan.salman.com

The recommended way to configure the SCAN is to have 3 IP addresses resolved through DNS. Since I don't have DNS in my environment, I will be using an entry in the hosts file for the SCAN, with a single SCAN IP address.

Download and unzip the software
Download the 12c Grid Infrastructure and RDBMS software.
The first release (12.1.0.1) can be downloaded from www.oracle.com, but since we are installing 12.1.0.2 here, we need to download patch set 1769377. The first 2 zip files are for the database and the last 2 are for the Grid Infrastructure binaries.

1) On each node, edit /etc/selinux/config and set the value of SELINUX to either "permissive" or "disabled".
SELINUX=permissive

2) On each node, configure the shared memory file system. Add the following line to the /etc/fstab file; modify the value of "size" based on the amount of memory you will be using for your SGA.
tmpfs                                   /dev/shm                tmpfs   rw,exec,size=8g        0 0

3) On each node, disable the firewall.

[root@salman1 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]

[root@salman1 ~]# chkconfig iptables off



4) Reboot all nodes.

5) On each node, add to the /etc/hosts file the IP addresses and fully qualified names of each RAC node, including the public IPs, virtual IPs, private IPs and SCAN IP.
#Public
192.231.231.40 salman1.salman.com       salman1
192.231.231.41 salman2.salman.com       salman2

#Virtual
192.231.231.50 salman1-vip.salman.com   salman1-vip
192.231.231.51 salman2-vip.salman.com   salman2-vip

#Private
10.10.10.10 salman1-priv.salman.com     salman1-priv
10.10.10.11 salman2-priv.salman.com     salman2-priv

#SCAN
192.231.231.60 salman-scan.salman.com   salman-scan
Make sure that the public interface and private interface have the same names on all the nodes and are listed in the same order in the ifconfig output (my public interface name is eth0 and private interface name is eth1).

6) Test connectivity between the nodes using the ping command from each node to the other node(s). The SCAN IP and virtual IPs do not need to be tested at this point.
From node salman1
[root@salman1 ~]#  ping salman1
[root@salman1 ~]#  ping salman1-priv
[root@salman1 ~]#  ping salman2
[root@salman1 ~]#  ping salman2-priv

From node salman2
[root@salman2 ~]#  ping salman2
[root@salman2 ~]#  ping salman2-priv
[root@salman2 ~]#  ping salman1
[root@salman2 ~]#  ping salman1-priv


7) We can perform automatic configuration of the nodes using the "yum" command. If you want to do manual configuration, skip this step and go to the next step.
Automatic configuration performs the following tasks:
- Installation of required RPM packages
- Setup kernel parameters in /etc/sysctl.conf file
- Creation of OS groups (oinstall, dba) and OS user (oracle)
- Setting limits for installation user “oracle”

For Oracle Linux, follow the steps in the following document to access the online yum repository.
http://public-yum.oracle.com/

On each node, execute the following command to perform all prerequisites automatically.
[root@salman1 ~]# yum install oracle-rdbms-server-12cR1-preinstall -y

As already mentioned, the above command installs all the packages required for Grid Infrastructure and/or RDBMS software installation. If you also plan to install a 32-bit client on this x86_64 server, you would additionally need to install the 32-bit versions of these RPM packages manually.
compat-libstdc++-33-3.2.3-69.el6.i686
glibc-2.12-1.7.el6 (i686)
glibc-devel-2.12-1.7.el6.i686
libgcc-4.4.4-13.el6 (i686)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6.i686
libXext-1.1 (i686)
libXtst-1.0.99.2 (i686)
libX11-1.3 (i686)
libXau-1.0.5 (i686)
libxcb-1.5 (i686)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
nfs-utils-1.2.3-15.0.1

If you want to install from the Oracle Linux installation media, use the "rpm -i" command to install the packages.
Automatic configuration also sets the required kernel parameters in the /etc/sysctl.conf file.
Automatic configuration creates the default OS groups, i.e. oinstall and dba (with group IDs 54321 and 54322 respectively), and the OS user oracle with user ID 54321. If you plan to implement role separation by creating a "grid" user to install and manage Grid Infrastructure and using the "oracle" user for the database software, you would need to create additional OS groups and the "grid" user manually, as follows (on both nodes).
Add groups
[root@salman1 ~]#  groupadd -g 54323 oper
[root@salman1 ~]#  groupadd -g 54325 asmdba
[root@salman1 ~]#  groupadd -g 54328 asmadmin
[root@salman1 ~]#  groupadd -g 54329 asmoper

Add user
[root@salman1 ~]# useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin,asmoper grid

Set passwords for both users (oracle and grid)
[root@salman1 ~]# passwd oracle
[root@salman1 ~]# passwd grid

Make user oracle member of oper and asmdba groups
[root@salman1 ~]#  usermod  -g oinstall -G dba,oper,asmdba oracle

Automatic configuration also sets the resource limits for the default user "oracle" in the /etc/security/limits.conf file. If you have created the "grid" user for Grid Infrastructure installation, you would also need to set these limits manually for the grid user.
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728
 
8) Skip this step if you followed step 7 above; otherwise perform the following tasks on each node.

Install the following RPM packages (or later versions) from either the yum repository or the Linux 6 media.
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
libXext-1.1 (x86_64)
libXext-1.1 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.3 (x86_64)
libX11-1.3 (i686)
libXau-1.0.5 (x86_64)
libXau-1.0.5 (i686)
libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
nfs-utils-1.2.3-15.0.1
unixODBC

Example (yum)
[root@salman1 ~]# yum install glibc

Example (Linux Media)
[root@salman1 ~]# rpm -i glibc

Example (Check after install)
[root@salman1 ~]# rpm -q glibc
glibc-2.17-55.el7.x86_64


On each node, edit /etc/sysctl.conf and add the following entries to set the kernel parameters.
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Execute the following command after adding the above lines:
/sbin/sysctl -p
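As a rough sanity check on these values: kernel.shmmax is in bytes and should be at least the size of your largest planned SGA, while kernel.shmall is counted in pages (typically 4 KB). A hedged sketch, assuming an 8 GB SGA to match the tmpfs size set in step 2:

```python
PAGE_SIZE = 4096                  # typical x86_64 page size
shmmax = 4398046511104            # bytes, from /etc/sysctl.conf above
shmall = 4294967296               # pages, from /etc/sysctl.conf above
planned_sga = 8 * 1024**3         # assumed 8 GB SGA (illustrative)

print(planned_sga <= shmmax)               # -> True
print(planned_sga <= shmall * PAGE_SIZE)   # -> True
```

If either check fails for your SGA size, increase the corresponding parameter before starting the instance, or shared memory allocation will fail at startup.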

On each node, add the groups and users for the Grid Infrastructure and database software. The "grid" user will be the owner and administrator of Grid Infrastructure, and the "oracle" user will be the owner and administrator of the database.
Add groups
[root@salman1 ~]# groupadd -g 54321 oinstall
[root@salman1 ~]# groupadd -g 54322 dba
[root@salman1 ~]# groupadd -g 54323 oper
[root@salman1 ~]# groupadd -g 54325 asmdba
[root@salman1 ~]# groupadd -g 54328 asmadmin
[root@salman1 ~]# groupadd -g 54329 asmoper

Add users
[root@salman1 ~]# useradd -u 54321 -g oinstall -G dba,oper,asmdba oracle
[root@salman1 ~]# useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin,asmoper grid

Set passwords for both users (oracle and grid)
[root@salman1 ~]# passwd oracle
[root@salman1 ~]# passwd grid

If you don't want role separation between Grid Infrastructure and the database, you may create a single user, i.e. "oracle", and perform both the Grid Infrastructure and Oracle RDBMS installations with this single user.
[root@salman1 ~]# useradd -u 54321 -g oinstall -G dba,oper,asmdba,asmadmin,asmoper oracle


On each node, create a .conf file (the file name can be anything) under the /etc/security/limits.d directory to set shell limits for the grid and oracle users. For example, create a file oracleusers.conf with the following entries. Alternatively, you can set the limits in /etc/security/limits.conf, but I prefer to set them under the /etc/security/limits.d directory.
# Grid user
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728

# Oracle user
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728

9) On each node, install oracleasm-support and oracleasmlib, and then configure oracleasm.
The oracleasm kernel driver is built into Oracle Linux and does not need to be installed; after installing the oracleasm-support and oracleasmlib packages, the oracleasm driver starts working.

If you are using some other flavor of Linux, for example Red Hat Linux, you would need to install all 3 packages (the oracleasm driver, oracleasm-support and oracleasmlib).
Install oracleasm-support from the yum repository or the Linux media, then download the oracleasmlib package from the following URL and install it.
http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html
Install oracleasmlib and oracleasm-support
[root@salman1 ~]#  yum install oracleasm-support
[root@salman1 ~]#  rpm -i oracleasmlib*

Configure oracleasm (the inputs during configuration are grid, asmadmin and y):
[root@salman1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Scan for Oracle ASM disks on boot (y/n) [y]:y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Check Configuration
[root@salman1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

10) On each node, add the following to the /etc/pam.d/login file if not already present.
session required /lib64/security/pam_limits.so
session required pam_limits.so

11) On each node, stop and disable NTP (Network Time Protocol) and the avahi-daemon.
During the installation, the Cluster Time Synchronization Service will be installed and used instead of NTP.
NTP
[root@salman1 ~]# /sbin/service ntpd stop
[root@salman1 ~]# chkconfig ntpd off
[root@salman1 ~]# mv /etc/ntp.conf /etc/ntp.conf.org
[root@salman1 ~]# rm /var/run/ntpd.pid

avahi-daemon
[root@salman1 ~]# /etc/init.d/avahi-daemon stop
Shutting down Avahi daemon:                                [  OK  ]

[root@salman1 ~]# chkconfig avahi-daemon off

[root@salman1 ~]# chkconfig --list avahi-daemon
avahi-daemon    0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@salman1 ~]#

12) On each node, make sure the /etc/resolv.conf file contains entries similar to the following. Replace salman.com with your domain name and 192.231.231.1 with your name server's IP address.
search salman.com
nameserver 192.231.231.1

13) On each node, install the cvuqdisk RPM.
This is required for cluvfy (Cluster Verification Utility) to properly discover shared disks; otherwise an error is reported while running cluvfy. Log in as root and execute the following steps.
Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which is the oinstall group.
[root@salman1 ~]# export CVUQDISK_GRP=oinstall

Install cvuqdisk, which is found under the unzipped Grid Infrastructure software.
[root@salman1 ~]# cd /u02/12.1.0.2/grid/rpm
[root@salman1 ~]# rpm -i cvuqdisk-1.0.9-1.rpm

14) On each node, create the directories and change their ownership to the respective users (grid and oracle).
[root@salman1 ~]# mkdir -p /u01/app/12.1.0/grid
[root@salman1 ~]# mkdir -p /u01/app/grid
[root@salman1 ~]# mkdir -p /u01/app/oracle

[root@salman1 ~]# chown -R grid:oinstall /u01
[root@salman1 ~]# chown oracle:oinstall /u01/app/oracle
[root@salman1 ~]# chmod -R 775 /u01/

15) From any node, partition the shared disks. As stated in the beginning, I have a total of 4 disks: 3 for the OCR and voting disk, and 1 for the DATA diskgroup. The disks /dev/sdc through /dev/sdf in the listing below are the ones I will partition and then turn into ASM disks using oracleasm.
[root@salman1 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0 Apr  7 11:38 /dev/sda
brw-rw----. 1 root disk 8,  1 Apr  7 11:38 /dev/sda1
brw-rw----. 1 root disk 8,  2 Apr  7 11:38 /dev/sda2
brw-rw----. 1 root disk 8,  3 Apr  7 11:38 /dev/sda3
brw-rw----. 1 root disk 8,  4 Apr  7 11:38 /dev/sda4
brw-rw----. 1 root disk 8,  5 Apr  7 11:38 /dev/sda5
brw-rw----. 1 root disk 8, 16 Apr  7 11:38 /dev/sdb
brw-rw----. 1 root disk 8, 17 Apr  7 11:38 /dev/sdb1
brw-rw----. 1 root disk 8, 32 Apr  7 11:38 /dev/sdc
brw-rw----. 1 root disk 8, 48 Apr  7 11:38 /dev/sdd
brw-rw----. 1 root disk 8, 64 Apr  7 11:38 /dev/sde
brw-rw----. 1 root disk 8, 80 Apr  7 11:38 /dev/sdf

Use following steps to partition each disk (/dev/sdc, /dev/sdd, /dev/sde, /dev/sdf).
[root@salman1 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x457c3565.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-25165823, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-25165823, default 25165823):
Using default value 25165823
Partition 1 of type Linux and of size 12 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

For simplicity, after executing the fdisk command for each disk, press the following key sequence to create a new partition: n, p, 1, <enter>, <enter>, w.
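If you have many disks, the same key sequence can be fed to fdisk non-interactively. A hypothetical sketch that only builds the input string for the sequence above (the actual fdisk invocation is illustrative and left commented out, since it would repartition the disk):

```python
# Key sequence from the text: n, p, 1, <enter>, <enter>, w
keys = ["n", "p", "1", "", "", "w"]
fdisk_input = "\n".join(keys) + "\n"
print(repr(fdisk_input))  # -> 'n\np\n1\n\n\nw\n'

# Illustrative only -- this would repartition the disk:
# import subprocess
# subprocess.run(["fdisk", "/dev/sdc"], input=fdisk_input, text=True)
```

Double-check the target device name before running anything like this against a real disk.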
After the partitioning completes, following are my partitioned disks
[root@salman1 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 32 Apr  7 12:04 /dev/sdc
brw-rw----. 1 root disk 8, 33 Apr  7 12:04 /dev/sdc1
brw-rw----. 1 root disk 8, 48 Apr  7 12:08 /dev/sdd
brw-rw----. 1 root disk 8, 49 Apr  7 12:08 /dev/sdd1
brw-rw----. 1 root disk 8, 64 Apr  7 12:08 /dev/sde
brw-rw----. 1 root disk 8, 65 Apr  7 12:08 /dev/sde1
brw-rw----. 1 root disk 8, 80 Apr  7 12:08 /dev/sdf
brw-rw----. 1 root disk 8, 81 Apr  7 12:08 /dev/sdf1

16) From any node, use the oracleasm command to create the ASM disks.
[root@salman1 ~]# oracleasm createdisk crs1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]# oracleasm createdisk crs2 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]# oracleasm createdisk crs3 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]# oracleasm createdisk data /dev/sdf1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]#
[root@salman1 sys]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@salman1 sys]# oracleasm listdisks
CRS1
CRS2
CRS3
DATA
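The four createdisk calls above can be generated from a name:partition map; a sketch where the function name is mine and the mapping matches this walkthrough. The generator only prints the commands; pipe its output to sh as root to actually run them:

```shell
#!/bin/sh
# Print one "oracleasm createdisk" command per name:partition pair.
gen_createdisk() {
  for pair in crs1:/dev/sdc1 crs2:/dev/sdd1 crs3:/dev/sde1 data:/dev/sdf1; do
    printf 'oracleasm createdisk %s %s\n' "${pair%%:*}" "${pair#*:}"
  done
}

gen_createdisk          # review the output, then: gen_createdisk | sh
```
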

Execute the “oracleasm scandisks” and “oracleasm listdisks” commands on all other nodes as well; you should be able to see all the ASM disks.
From the other node(s), issue the following as root
[root@salman2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@salman2 ~]# oracleasm listdisks
CRS1
CRS2
CRS3
DATA

17)
On each node, set the I/O scheduler for the ASM disks. Modify the disk names to match your environment.
[root@salman1 ~]# echo deadline > /sys/block/sdc/queue/scheduler
[root@salman1 ~]# echo deadline > /sys/block/sdd/queue/scheduler
[root@salman1 ~]# echo deadline > /sys/block/sde/queue/scheduler
[root@salman1 ~]# echo deadline > /sys/block/sdf/queue/scheduler
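Note that these echo writes do not survive a reboot. One common way to make the scheduler persistent is a udev rule; a sketch, where the rule file name is an assumption and the sd[c-f] match covers exactly the four disks used here:

```
# /etc/udev/rules.d/60-oracle-schedulers.rules  (hypothetical file name)
ACTION=="add|change", KERNEL=="sd[c-f]", ATTR{queue/scheduler}="deadline"
```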

18)
On each node, add the following to the /home/<username>/.bash_profile file of both the “grid” and “oracle” users. At this point, we do not need to set other environment variables such as ORACLE_BASE or ORACLE_HOME.
umask 022

19)
Run cluvfy (Optional)
On the node where you plan to initiate the installation, open a terminal and log in as the “grid” user; you can optionally execute the cluster verification utility (cluvfy). Click here to see the output of cluvfy
[grid@salman1 ~]$ cd /u02/12.1.0.2/grid/
[grid@salman1 grid]$ ./runcluvfy.sh stage -pre crsinst -n salman1 -fixup -verbose
This will fail if user equivalence (passwordless ssh login) has not been configured from each node to the other node(s). Passwordless ssh login is also optional at this point, because the installer will allow us to configure user equivalence from within the installation wizard, as you will see in a while. To learn how to configure SSH connectivity prior to the installation, click here


20)
Choose any node from where you want to initiate the installation and log in as the installation user (“grid” in this example). You can log in at the server's desktop; if you did not install a desktop environment, use the X Window System.

21)
Start installation from the extracted software directory (/u02/12.1.0.2/grid), execute “./runInstaller” to begin the installation

22)
Click Next

23)
Click Next

24)
Select “Advanced Installation” and click Next

25)
Click Next

26) Change the Cluster Name, SCAN Name, and SCAN Port if needed. Make sure the SCAN Name is the same as specified in the /etc/hosts file. Uncheck “Configure GNS”. Click Next

27)
You will see only the first node (where you initiated the installation) here. Click Add to open a window and provide the details of the second node. You will need to add the details of every node in your RAC environment by clicking the Add button.
Click SSH Connectivity. If you have already configured SSH for passwordless login to the remote node(s), you can skip the next step.

28)
After clicking SSH Connectivity, you will see text boxes for the username (grid) and password used to set up SSH connectivity. Provide the password and click the Setup button. It will take a few seconds to configure SSH.

29)
Once SSH configuration completes, click OK, then click Next

30)
Select correct network interfaces for Public and Private interconnects and click Next

31) 
Click Next

32) Provide a diskgroup name. I am using CRS, as I will store the OCR and voting disks on this diskgroup. I am also selecting “External” redundancy. If your disks are not already mirrored, you should use either Normal (minimum 2 disks required) or High (minimum 3 disks required) redundancy to safeguard against disk failure. Select the ASM disks to be part of this diskgroup and click Next

33)
Provide password for user SYS (SYSASM) and click Next

34)
Click Next

35)
Click Next

36)
Click Next

37)
If required, modify the paths based on the directories created in a previous step, or go ahead with the default locations. Click Next

38)
Click Next

39)
Select this option if you want the required scripts to be executed automatically by OUI, rather than being prompted to execute them as “root” during the installation

40) 
If you see a failure or warning for any prerequisite check, fix it before clicking Next. In my case, I am ignoring all of them for the following reasons.
i) I am using VirtualBox with 2 GB RAM, so I ignored the physical memory warning
ii) I am not using DNS, so I ignored the resolv.conf failure
iii) I am not using DNS, so I ignored the DNS/NIS failure
Click Next

41)
Click Install button to start installation

42)
The installer will prompt for permission to execute the scripts as root. Click Yes

43)
Installation completed. I see one error here, which is due to the cluvfy execution failure during the post-installation steps. The reason for the cluvfy failure is the same as explained in step 40 above, so I just clicked OK to close the box

44) 
Click Yes

45) 
Click Close; the installation has completed successfully

46)
On both nodes, add the following environment variables to the .bash_profile file of the “grid” user. On node2, the ORACLE_SID value should be +ASM2, and so on.
##grid user environment variables
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/12.1.0/grid; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib; export CLASSPATH
export TEMP=/tmp
export TMPDIR=/tmp
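The per-node ORACLE_SID line is the only part of this snippet that differs between nodes. A sketch of a helper that derives it from the node number, so the same profile can be copied everywhere; the function name is mine:

```shell
#!/bin/sh
# Derive the ASM instance SID from a node number: node 1 -> +ASM1, etc.
asm_sid() {
  printf '+ASM%s' "$1"
}

ORACLE_SID=$(asm_sid 1); export ORACLE_SID   # use asm_sid 2 on node2, and so on
```
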

47)
Check the health of CRS. Everything is online and working properly.
[grid@salman2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.asm
               ONLINE  ONLINE       salman1                  Started,STABLE
               ONLINE  ONLINE       salman2                  Started,STABLE
ora.net1.network
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.ons
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       salman1                  STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       salman1                  169.254.93.149 10.1                             0
                                                             .10.10,STABLE
ora.cvu
      1        ONLINE  ONLINE       salman1                  STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       salman1                  Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman2.vip
      1        ONLINE  ONLINE       salman2                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
--------------------------------------------------------------------------------

[grid@salman2 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node salman1

[grid@salman2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node salman1


48) 
Installing the Database Software
Before starting the database software installation, we first create a diskgroup DATA, which will store the database that we create during the installation.
Open a terminal and log in as the OS user “grid”. Now use SQL*Plus to log in to the ASM instance with SYSASM privileges, and then create the diskgroup DATA

[grid@salman1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 20 12:11:05 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: sys as sysasm
Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options


SQL> select name, path from v$asm_disk;

NAME                           PATH
------------------------------ ----------------------------------------------------------------------------------
                               ORCL:DATA
CRS1                           ORCL:CRS1
CRS2                           ORCL:CRS2
CRS3                           ORCL:CRS3

Compatibility for ASM and RDBMS should be specified when creating the diskgroup; otherwise it defaults to 10.1.0.0.0 and we would not be able to store 12c database files on it.

SQL>  create diskgroup DATA external redundancy disk  'ORCL:DATA' ATTRIBUTE 
'compatible.asm'='12.1.0.0.0' , 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

If you created this diskgroup from node1, use the srvctl command to start this new diskgroup resource on all other nodes
[grid@salman1 ~]$ srvctl start diskgroup -diskgroup DATA -node salman2
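In a cluster with more than two nodes, the same srvctl call repeats per node and can be followed by a status check. A sketch that only prints the commands; the function name is mine, and the node list is an assumption for this two-node setup. Pipe the output to sh as “grid” to execute it:

```shell
#!/bin/sh
# Emit the srvctl commands to start a diskgroup on each remaining node,
# then a status command to confirm it is mounted everywhere.
diskgroup_cmds() {
  dg=$1; shift
  for node in "$@"; do
    printf 'srvctl start diskgroup -diskgroup %s -node %s\n' "$dg" "$node"
  done
  printf 'srvctl status diskgroup -diskgroup %s\n' "$dg"
}

diskgroup_cmds DATA salman2   # review, then: diskgroup_cmds DATA salman2 | sh
```
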

Since I am using two different OS users here, log in from any node as user “oracle” (using either X Window or the Linux desktop), and initiate the installation from the directory where the database software has already been extracted: /u02/12.1.0.2/database
[oracle@salman1 ~]$ cd /u02/12.1.0.2/database
[oracle@salman1 database]$ ./runInstaller
Click Next

49)
I want a new database to be created during the installation. Click Next

50)
I selected Server class database. Click Next

51)
We are doing a RAC database installation, so the software will be installed on all the RAC nodes. Click Next

52)
Click Next

53) 
By default, all nodes are selected for the database software installation. Click the “SSH Connectivity” button, as previously done during the Grid Infrastructure installation, and configure SSH connectivity before moving forward; this time SSH will be configured for the “oracle” user. Once configured, click Next

54)
Click Next

55)
Click Next

56)
Click Next

57)
Modify paths if required, click Next


58)
Select the type of database you want to create, click Next

59)
Provide the database name, and check the checkbox if you want it to be a container database

60)
Specify memory allocation details for the database instances, click Next

61) Select “Oracle Automatic Storage Management”, as we have already created an ASM diskgroup to store the database.

62) 
Click Next

63)
Click Next

64)
Select the diskgroup we created to store the database files, click Next

65)
Specify the passwords for administrator users, click Next

66)
Click Next

67)

On the last screen, I see 4 warnings, which I can ignore for the following reasons.
i) Since I am not using a name server, I received warnings 1 and 2, so I can ignore both
ii) Just as during the Grid Infrastructure installation, OUI had an issue recognizing the actual size of my /dev/shm file system; the same happened here, so I can ignore it
iii) Limits for the oracle user are already set in /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf, which can be confirmed using the “ulimit -a” command, but OUI checks the /etc/security/limits.conf file for limits, so I can ignore this warning as well. Click Next

68)
Click Install

69)
During the installation, you will be prompted to execute root.sh as root on both nodes. When prompted, execute it on each node one by one, then click OK to close the dialog so the installation can proceed.

70)
The Database Configuration Assistant (DBCA) will now start to create the database

71)
Click OK to close the prompt after database creation completes.

72)
Click Close.

73)
Add the following environment variable settings to the .bash_profile of the “oracle” user on both nodes. Modify the value of ORACLE_SID for each node accordingly (salmndb2 on node2, and so on).
ORACLE_SID=salmndb1; export ORACLE_SID
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib; export CLASSPATH
export TEMP=/tmp
export TMPDIR=/tmp


74)
The RAC installation has completed successfully. Check the status of all resources and database instances
[grid@salman1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.DATA.dg
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.asm
               ONLINE  ONLINE       salman1                  Started,STABLE
               ONLINE  ONLINE       salman2                  Started,STABLE
ora.net1.network
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.ons
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       salman1                  STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       salman1                  169.254.93.149,STABL
                                                             E
ora.cvu
      1        ONLINE  ONLINE       salman1                  STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       salman1                  Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman2.vip
      1        ONLINE  ONLINE       salman2                  STABLE
ora.salmndb.db
      1        ONLINE  ONLINE       salman1                  Open,STABLE
      2        ONLINE  ONLINE       salman2                  Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
--------------------------------------------------------------------------------


[grid@salman1 ~]$ srvctl status database -d salmndb
Instance salmndb1 is running on node salman1
Instance salmndb2 is running on node salman2
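With both instances up, clients should connect through the SCAN listener rather than an individual node VIP. A sketch of building an EZConnect string for that purpose; the helper name is mine, and the SCAN name salman-scan, port 1521, and service name salmndb are assumptions based on this setup:

```shell
#!/bin/sh
# Build an EZConnect descriptor user@//host:port/service so clients
# reach the database through the SCAN listener.
ezconnect() {
  user=$1 scan=$2 port=$3 service=$4
  printf '%s@//%s:%s/%s' "$user" "$scan" "$port" "$service"
}

ezconnect system salman-scan 1521 salmndb
# e.g. sqlplus "$(ezconnect system salman-scan 1521 salmndb)"
```
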
