All configurations are based on the following building blocks:
Hardware
Server nodes
Storage
Networking
Software
Operating system
Cluster software
Oracle RAC (application)
Application architecture
Oracle9i RAC on RAW devices is based on a shared disk architecture. Figure 2-1 shows a two-node cluster. The lower solid line is the primary Oracle interconnect; the middle dashed line is the secondary Oracle interconnect. For high availability, both of these networks should be defined in HACMP as "private".
HACMP/ESCRM provides Oracle9i RAC with the infrastructure for concurrent access to disks. Although HACMP provides concurrent access and a disk locking mechanism, this mechanism is not used. Oracle, instead, provides its own locking mechanism for concurrent data access, integrity, and consistency.
Volume groups are varied on (activated) on all the nodes concurrently, thus ensuring short failover time. This type of concurrent access can only be provided for RAW logical volumes (devices); HACMP/ESCRM does not support concurrent file systems.
Oracle datafiles use the RAW devices located on the shared disk subsystem. In this configuration, an HACMP resource group has to be defined to handle the concurrent volume groups.
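Before defining the resource group, it is worth verifying that the shared volume group is varied on in concurrent mode on every node. The following is a sketch only; the volume group name oraclevg is a placeholder for illustration, so substitute the name of your own concurrent volume group:

```
{node1:root}/-> lsvg -o
{node1:root}/-> lsvg oraclevg | grep -i concurrent
```

The first command lists the volume groups that are currently varied on; the second confirms that the volume group is operating in concurrent mode. Repeat the check on each node of the cluster.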
Oracle9i RAC installation and configuration (on GPFS)
In a GPFS file system environment you have two choices for installing Oracle code:
1. Install the code on each node (non-shared file system)
Create a JFS or JFS2 file system on each node and set oracle.dba ownership on these file systems. When installing Real Application Clusters, the Oracle Universal Installer (OUI) will copy the Oracle code from the node from which you are running the installer to the other nodes in the cluster. This results in one copy of the Oracle binaries on each node.
2. Install the Oracle software on a GPFS file system, thus creating only one copy of the Oracle binaries. All instances will share the same code. This option is more convenient because all configuration and SW maintenance can be performed from any node in the cluster.
Single point of control instance management
On each node, create the directory /var/opt/oracle and set the ownership to the user oracle. During Oracle9i RAC installation, a file called srvConfig.loc will be created in this directory.
This file specifies the location of the common configuration file used by the Oracle server control utility (srvctl). The configuration file itself must reside on a shared file system (GPFS), and is used for central management of the Oracle9i RAC instances.
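The per-node preparation can be sketched as follows. The shared-file path /oracle/srvm/srvConfig.dbf below is an assumption for illustration; any location on the GPFS file system will do:

```shell
# Hedged sketch: prepare the srvctl configuration pointer on one node.
SRVM_DIR=/var/opt/oracle              # local directory, one per node
SHARED_CFG=/oracle/srvm/srvConfig.dbf # assumed shared (GPFS) location

mkdir -p "$SRVM_DIR"
# Ownership must go to the oracle user; this may fail if the user
# does not exist yet, hence the guard.
chown oracle:dba "$SRVM_DIR" 2>/dev/null || true

# srvConfig.loc points srvctl at the shared configuration file
printf 'srvconfig_loc=%s\n' "$SHARED_CFG" > "$SRVM_DIR/srvConfig.loc"
cat "$SRVM_DIR/srvConfig.loc"
```

Because the pointer file is local, these steps must be repeated on every node, while the file it points to exists only once, on GPFS.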
Oracle code file system configuration
In our environment we chose a GPFS file system to store the Oracle code. For GPFS file system configuration, refer to 3.10.2, GPFS cluster configuration.
Check that the designated Oracle code file system is mounted. We used /oracle as the mount point for this file system. All binaries, Oracle logs, and initialization (init) files will be on the GPFS file system. Be sure to allocate enough space on this file system (see Example 4-6), considering future SW upgrades and Oracle log files.
Example 4-6 File system allocation
{node1:root}/-> df -Pk
Filesystem    1024-blocks      Used Available Capacity Mounted on
/dev/hd4            65536     37936     27600      58% /
/dev/hd2          1376256   1354380     21876      99% /usr
/dev/hd9var        131072     66216     64856      51% /var
/dev/hd3           917504     49744    867760       6% /tmp
/dev/hd1            65536     15132     50404      24% /home
/proc                   -         -         -       -  /proc
/dev/hd10opt       131072     57436     73636      44% /opt
/dev/data       144424960 129734240  14690720      90% /data
/dev/oracle       9748480   6607360   3141120      68% /oracle
Change the ownership of the Oracle directory to the oracle user and dba group:
chown -R oracle.dba /oracle
Oracle user environment
Set up the Oracle environment in the $HOME/.profile file of the user oracle. Depending on your configuration, you may choose the user oracle home directory on an internal disk (/home/oracle, in which case you have to propagate the same environment on all nodes), or on a GPFS file system, in the /oracle directory (/oracle/home in our environment).
Example 4-7 Setting environment variables in ~/.profile
>>> Previous lines are generic environment lines (MAILMESG, PS1, etc.) <<<
# Oracle specific environment starts HERE
HOST=`hostname -s`
# This stanza selects the value of the ORACLE_SID variable depending
# on the host the oracle user logs in to.
case ${HOST} in
  node1) SID=1;;
  node2) SID=2;;
  node3) SID=3;;
  node4) SID=4;;
esac
# Variables needed during installation and normal operation
export ORACLE_SID=rac${SID}
export DISPLAY=node1:0.0
export TMPDIR=/oracle/temp
export TEMP=/oracle/temp
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/product/9.2.0
export PATH=$ORACLE_HOME/bin:$PATH
The following variables are mandatory to perform an Oracle9i RAC installation:
PATH: the $ORACLE_HOME/bin directory must be included in the command search path (see Example 4-7).
ORACLE_SID is the system identifier for an Oracle server instance. This variable uniquely identifies a database instance. For consistency, we chose rac as the instance name prefix (see Example 4-7 and Table 4-1).
Table 4-1   Instance name selection

Host name   Node name   Thread ID   SID
node1       node1       1           rac1
node2       node2       2           rac2
node3       node3       3           rac3
node4       node4       4           rac4
ORACLE_HOME is the directory that contains the Oracle software (binaries, libraries etc.).
TMPDIR and TEMP: during installation, Oracle needs approximately 800 MB of temporary space. To keep the system /tmp directory under control, we allocated separate temporary space in the /oracle/temp directory and pointed both variables to it.
ORACLE_BASE specifies the base directory for Oracle software.
Also, the DISPLAY variable is needed on nodes that do not have a graphical display, so that the installer's graphical output can be redirected to a node that does.
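A quick sanity check before launching the installer can be sketched as follows. The exported defaults mirror our lab values from Example 4-7 and are assumptions; adjust them for your environment:

```shell
# Hedged sketch: verify that the mandatory installation variables are
# set. The := defaults below are our lab values, not requirements.
export ORACLE_SID=${ORACLE_SID:-rac1}
export ORACLE_BASE=${ORACLE_BASE:-/oracle}
export ORACLE_HOME=${ORACLE_HOME:-/oracle/product/9.2.0}
export TMPDIR=${TMPDIR:-/oracle/temp}
export TEMP=${TEMP:-/oracle/temp}
export DISPLAY=${DISPLAY:-node1:0.0}

# Collect the names of any variables that are still empty
missing=""
for v in ORACLE_SID ORACLE_BASE ORACLE_HOME TMPDIR TEMP DISPLAY; do
    eval "val=\$$v"
    [ -n "$val" ] || missing="$missing $v"
done
if [ -n "$missing" ]; then
    echo "Missing variables:$missing"
    exit 1
fi
echo "Environment OK"
```

Running the check on each node (as user oracle) catches a .profile that was not propagated correctly before the installer fails with a less obvious error.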
Test the oracle user environment by logging in to the systems (as user oracle) and displaying the variables, as shown in Example 4-8.
Example 4-8 Testing the environment
{node1:oracle}/oracle/home-> echo $ORACLE_SID
rac1
Now start the Oracle9i Universal Installer (OUI) graphical interface tool.
Creating and validating the database
This section describes how to plan and create a database and the associated storage.
We created a Real Application Clusters database using the Database Configuration Assistant (dbca) graphical user interface (GUI). The database can also be created manually, using scripts.
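As a sketch, launching the assistant from any node looks like the following; dbca ships with Oracle9i in $ORACLE_HOME/bin and must be able to open an X display (here redirected to node1, as in our environment):

```
{node1:oracle}/oracle/home-> export DISPLAY=node1:0.0
{node1:oracle}/oracle/home-> $ORACLE_HOME/bin/dbca &
```

With Real Application Clusters installed, dbca prompts you to choose between an Oracle cluster database and a single-instance database before presenting the database templates.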