
Configuring RAC logical disks on AIX

There are two FC protocols, depending on whether an FC switch (fabric configuration) or an FC hub (arbitrated loop) is used:


The nodes are connected directly to the ESS, with no network equipment in between: choose Arbitrated Loop (the al protocol). This is the default value.

The nodes are connected to a SAN through a switch: select Point-to-Point (the pt2pt protocol).

Because our platform is using a switched SAN, the point-to-point protocol is selected on all nodes and ESS Fibre Channel adapters.
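Setting pt2pt consistently on every adapter of every node is easy to script. A minimal sketch: it only prints the chdev commands so they can be reviewed before being run for real, and the adapter list here (fcs0, fcs1) is illustrative, not taken from the platform described above.

```shell
#!/bin/sh
# Sketch: print the chdev command needed to set pt2pt on each FC
# adapter of a node, for review before actually running them.
# The adapter list is illustrative; on a real AIX node derive it with:
#   lsdev -Cc adapter -F name | grep '^fcs'
gen_pt2pt_cmds() {
    for fcs in $1; do
        echo "chdev -l $fcs -a init_link=pt2pt"
    done
}

gen_pt2pt_cmds "fcs0 fcs1"
```

Piping the output through a review step (or into `sh` once verified) keeps a multi-node rollout consistent.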

To view the current value set on the FC adapter:

{node3:root}/-> lsattr -El fcs0 | grep init_link
init_link al INIT Link flags True

To change this value, use "smitty devices", select the FC Adapter menu, and change the INIT Link flags field. Alternatively, use the command:

{node3:root}/-> chdev -l fcs0 -a init_link=pt2pt

If you are unsure how your nodes are connected to the disk subsystem, choose init_link=al. With this setting the adapter first tries to detect a switch and, if that fails, falls back to arbitrated loop; in effect it auto-detects whether it is connected to an FC hub or a switch.
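After the link comes up you can confirm what the adapter actually negotiated: fcstat reports the port type (for example "Port Type: Fabric" behind a switch). A small sketch that extracts that field; the sample text below stands in for real fcstat output and is illustrative only.

```shell
#!/bin/sh
# Extract the negotiated port type from fcstat-style output.
# On a live node you would run:  fcstat fcs0 | port_type
port_type() {
    sed -n 's/^Port Type: *//p'
}

# Illustrative sample, standing in for `fcstat fcs0` output
sample="World Wide Node Name: 0x20000000C9487B04
Port Type: Fabric
Port Speed (running): 2 GBIT"

echo "$sample" | port_type
```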

Check whether all fiber cables are connected (nodes to FC switch, FC switch to ESS). This step assumes that the ESS storage configuration has been performed and checked.

For consistency, we recommend that you remove any previous definition of the FC adapters and their child devices, then run a fresh cfgmgr. On each node and for each FC adapter, issue:

{node3:root}/-> rmdev -Rdl fcs0

{node3:root}/-> cfgmgr
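With several adapters per node, the remove-and-rediscover step can be generated in one pass: one rmdev per adapter, then a single cfgmgr. A sketch that prints the commands for review rather than executing them; the adapter list is again illustrative.

```shell
#!/bin/sh
# Sketch: print the commands that remove each FC adapter definition
# (with its child devices) and then rediscover everything.
reset_fc_adapters() {
    for fcs in $1; do
        echo "rmdev -Rdl $fcs"   # -R: children too, -d: delete definition
    done
    echo "cfgmgr"                # one rediscovery pass for the node
}

# Print the commands that would run on this node
reset_fc_adapters "fcs0 fcs1"
```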
