
Typical Oracle RAC configurations

All configurations are based on the following building blocks:

Hardware
  - Server nodes
  - Storage
  - Networking

Software
  - Operating system
  - Cluster software
  - Oracle RAC (application)

Application architecture


Oracle9i RAC on RAW devices is based on a shared disk architecture. Figure 2-1 shows a two-node cluster. The lower solid line is the primary Oracle interconnect; the middle dashed line is the secondary Oracle interconnect. For high availability, both of these networks should be defined in HACMP as "private".

[Figure 2-1: Two-node Oracle9i RAC shared disk cluster (figure not reproduced)]
HACMP/ESCRM provides Oracle9i RAC with the infrastructure for concurrent access to disks. Although HACMP provides concurrent access and a disk locking mechanism, Oracle does not use this locking mechanism; instead, Oracle provides its own mechanism for concurrent data access, integrity, and consistency.


Volume groups are varied on (activated) on all the nodes, thus ensuring a short failover time. This type of concurrent access can only be provided for RAW logical volumes (devices); HACMP/ESCRM does not support concurrent file systems.

Oracle datafiles use the RAW devices located on the shared disk subsystem. In this configuration, an HACMP resource group has to be defined to handle the concurrent volume groups.
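
As a quick check, you can verify from any node that the shared volume group is concurrent-capable and list the raw logical volumes it contains. This is a minimal sketch; the volume group name oraclevg is an assumption:

{node1:root}/-> lsvg oraclevg | grep -i concurrent     # concurrent state and capability
{node1:root}/-> lsvg -l oraclevg                       # raw logical volumes for the datafiles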

Oracle9i RAC installation and configuration (on GPFS)

In a GPFS file system environment you have two choices for installing Oracle code:

1. Install the code on each node (non-shared file system)

Create a JFS or JFS2 file system on each node and set oracle.dba ownership on these file systems. When installing Real Application Clusters, the Oracle Universal Installer (OUI) copies the Oracle code from the node on which you run the installer to the other nodes in the cluster. This results in one copy of the Oracle binaries on each node.

2. Install the Oracle software on a GPFS file system, thus creating only one copy of the Oracle binaries. All instances share the same code. This option is more convenient because all configuration and software maintenance can be performed from any node in the cluster.
Single point of control instance management

On each node, create the directory /var/opt/oracle and set its ownership to the user oracle. During Oracle9i RAC installation, a file called srvConfig.loc will be created in this directory.
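
A minimal sketch of this step, to be run on every node (the dba group is an assumption, matching the ownership used for /oracle later in this chapter):

{node1:root}/-> mkdir -p /var/opt/oracle
{node1:root}/-> chown oracle.dba /var/opt/oracle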

This file specifies the location of the common configuration file used by the Oracle server control utility (srvctl). That configuration file must reside on a shared file system (GPFS), and is used by the Oracle server manager for central management of the Oracle9i RAC instances.
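
For illustration, srvConfig.loc contains a single srvconfig_loc entry pointing at the shared configuration file; the GPFS path below is an assumption:

# /var/opt/oracle/srvConfig.loc (same contents on every node)
srvconfig_loc=/oracle/shared/srvConfig.dbf

Before its first use, the shared configuration file is normally initialized once with srvconfig -init.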

Oracle code file system configuration

In our environment we chose a GPFS file system to store the Oracle code. For GPFS file system configuration, refer to 3.10.2, GPFS cluster configuration.

Check if the designated Oracle code file system is mounted. We used /oracle as the mount point for this file system. All binaries, Oracle logs, and initialization (init) files will reside on the GPFS file system. Be sure to allocate enough space on this file system (see Example 4-6), considering future software upgrades and Oracle log files.

Example 4-6 File system allocation

{node1:root}/-> df -Pk
Filesystem    1024-blocks      Used Available Capacity Mounted on
/dev/hd4            65536     37936     27600      58% /
/dev/hd2          1376256   1354380     21876      99% /usr
/dev/hd9var        131072     66216     64856      51% /var
/dev/hd3           917504     49744    867760       6% /tmp
/dev/hd1            65536     15132     50404      24% /home
/proc                   -         -         -       -  /proc
/dev/hd10opt       131072     57436     73636      44% /opt
/dev/data       144424960 129734240  14690720      90% /data
/dev/oracle       9748480   6607360   3141120      68% /oracle

Change the ownership and access permission for the Oracle directory:

chown -R oracle.dba /oracle
Oracle user environment

Set up the Oracle environment in the $HOME/.profile file of the user oracle. Depending on your configuration, you may place the oracle user's home directory either on an internal disk (/home/oracle, in which case you have to propagate the same environment to all nodes) or on a GPFS file system under the /oracle directory (/oracle/home in our environment).

Example 4-7 Setting environment variables in ~/.profile

>>> Previous lines are generic environment lines (MAILMESG, PS1, etc.) <<<
# Oracle specific environment starts HERE
HOST=`hostname -s`
# This stanza selects the value of the ORACLE_SID variable depending on
# the host the oracle user logs in to.
case ${HOST} in
  node1) SID=1;;
  node2) SID=2;;
  node3) SID=3;;
  node4) SID=4;;
esac
# Variables needed during installation and normal operation
export ORACLE_SID=rac${SID}
export DISPLAY=node1:0.0
export TMPDIR=/oracle/temp
export TEMP=/oracle/temp
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/product/9.2.0
export PATH=$ORACLE_HOME/bin:$PATH

The following variables are mandatory to perform an Oracle9i RAC installation:

ORACLE_SID is the system identifier for an Oracle server instance; this variable uniquely identifies a database instance. For consistency, we chose rac as the instance name prefix (see Example 4-7 and Table 4-1).

Table 4-1 Instance name selection

Host name   Node name   Thread ID   SID
node1       node1       1           rac1
node2       node2       2           rac2
node3       node3       3           rac3
node4       node4       4           rac4

ORACLE_HOME is the directory that contains the Oracle software (binaries, libraries, etc.).
TMPDIR and TEMP: during installation, Oracle needs approximately 800 MB of temporary space. To keep the system /tmp directory under control, we allocated separate temporary space in the /oracle/temp directory and pointed both variables at it (see the sketch after this list).
ORACLE_BASE specifies the base directory for the Oracle software.

Also, the DISPLAY variable is needed on nodes that do not have a graphical display, so that the installer output can be redirected to a node that does.
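
The temporary directory referenced by TMPDIR and TEMP must exist and be writable by the oracle user before starting the installation. A minimal sketch, using the paths from Example 4-7 (because /oracle is on GPFS, this needs to be done only once):

{node1:root}/-> mkdir -p /oracle/temp
{node1:root}/-> chown oracle.dba /oracle/temp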
Test the oracle user environment by logging in to the systems (as user oracle) and displaying the variables, as shown in Example 4-8.

Example 4-8 Testing the environment

{node1:oracle}/oracle/home-> echo $ORACLE_SID
rac1

Now start the Oracle9i Universal Installer (OUI) graphical interface tool.
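
As a sketch, OUI is launched with the runInstaller script shipped on the installation media; the /cdrom mount point below is an assumption:

{node1:oracle}/oracle/home-> cd /cdrom
{node1:oracle}/cdrom-> ./runInstaller &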

Creating and validating the database


This section describes how to plan and create a database and the associated storage.
We created a Real Application Clusters Database using the dbca graphical user interface (GUI).

The database can also be created manually, using scripts.
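
For reference, a minimal sketch of launching dbca as the oracle user; it relies on the environment set in Example 4-7, and DISPLAY must point to a host running an X server:

{node1:oracle}/oracle/home-> export DISPLAY=node1:0.0
{node1:oracle}/oracle/home-> dbca &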
