
Configuring the virtual path devices

We recommend starting with a "fresh" disk configuration, so delete all previously configured FC adapters and their child (disk) devices first.
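If old FC definitions are present, a minimal cleanup sketch could look like the following (the adapter names fcs0 and fcs1 are assumptions; check the names reported by lsdev on your node before removing anything). The -R flag removes the child devices along with the adapter, and -d deletes the definitions from the ODM:

{node1:root}/-> lsdev -Cc adapter | grep fcs
{node1:root}/-> rmdev -dl fcs0 -R
{node1:root}/-> rmdev -dl fcs1 -R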


On node1, we checked which disks are still defined:

{node1:root}/-> lspv
hdisk0    0022be2ab1cd11ac    rootvg    active
hdisk1    0022be2a3d02ead0    None
hdisk2    0022be2a4cbbafd8    None
hdisk3    none                None

These are the internal SCSI disk drives.

{node1:root}/-> lscfg | grep disk
+ hdisk3    U1.9-P2/Z2-A8    16 Bit LVD SCSI Disk Drive (36400 MB)
+ hdisk2    U1.9-P2/Z1-A8    16 Bit LVD SCSI Disk Drive (36400 MB)
+ hdisk1    U1.9-P1/Z2-A8    16 Bit LVD SCSI Disk Drive (36400 MB)
+ hdisk0    U1.9-P1/Z1-A8    16 Bit LVD SCSI Disk Drive (36400 MB)

In order to include the ESS disks, run the configuration manager on each node:

{node1:root}/-> cfgmgr -v

Since the ESS was configured with two host paths for each node (node1a and node1b), each LUN shows up as two hdisks on the node. Those two logical hdisks actually represent the same physical disk, accessed via the two ESS-configured paths.

When the SDD driver package is installed, a third disk entry, named vpath* (virtual path), is automatically created. To benefit from the SDD capabilities, use this virtual path disk entry instead of the regular hdisks when configuring volume groups. This provides load balancing and high availability.

In normal mode, storage traffic is balanced over the two installed FC adapters. If one of the FC adapters fails for any reason, data access continues over the surviving adapter.
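To confirm that both adapters are active and sharing the I/O load, the SDD datapath commands (listed later in Example 3-16) can be used, for instance:

{node1:root}/-> datapath query adapter
{node1:root}/-> datapath query adaptstats

Both adapters should be reported in a healthy state, with comparable I/O counters.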

Check the new disk configuration discovered by cfgmgr:

Example 3-13 Listing the virtual path devices

{node1:root}/-> lsdev -Cs dpo
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver
vpath4 Available Data Path Optimizer Pseudo Device Driver
vpath5 Available Data Path Optimizer Pseudo Device Driver
vpath6 Available Data Path Optimizer Pseudo Device Driver

Each vpath represents one physical LUN.

Example 3-14 Listing the FC attached disks (LUNs)

{node1:root}/-> lsdev -Cs fcp
hdisk4  Available 2V-08-01 IBM FC 2105800
hdisk5  Available 2V-08-01 IBM FC 2105800
hdisk6  Available 2V-08-01 IBM FC 2105800
hdisk7  Available 2V-08-01 IBM FC 2105800
hdisk8  Available 2V-08-01 IBM FC 2105800
hdisk9  Available 2V-08-01 IBM FC 2105800
hdisk10 Available 2V-08-01 IBM FC 2105800
hdisk11 Available 2v-08-01 IBM FC 2105800
hdisk12 Available 2v-08-01 IBM FC 2105800
hdisk13 Available 2v-08-01 IBM FC 2105800
hdisk14 Available 2v-08-01 IBM FC 2105800
hdisk15 Available 2v-08-01 IBM FC 2105800
hdisk16 Available 2v-08-01 IBM FC 2105800
hdisk17 Available 2v-08-01 IBM FC 2105800

These are the FC disks; there are only seven physical LUNs, but they appear twice because each node is connected to the storage using two FC adapters.

Important: Depending on the number of hdisks defined on a node prior to the ESS disk setup, the numbering of the new hdisks and vpaths may vary from one node to another. The only information you should rely on is the PVID (Physical Volume IDentifier) of the disk. This identifier is written on the disk itself and is retrieved by each node; it is shown in the second column of the lspv command output.
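For example, to locate a given LUN on each node, you can grep the lspv output for its PVID. The PVID 0022be2a31fa6b48 below is the one that appears in Example 3-15, and node2 is assumed here to be the second cluster node:

{node1:root}/-> lspv | grep 0022be2a31fa6b48
{node2:root}/-> lspv | grep 0022be2a31fa6b48

Whatever hdisk or vpath numbers each node reports, the entries sharing this PVID refer to the same physical LUN.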

To check the physical volumes, execute the following command:

Example 3-15 List of vpath devices created by SDD (extract)

{node1:root}/-> lspv
hdisk0    0022be2ab1cd11ac    rootvg    active
hdisk1    0022be2a3d02ead0    None
...
hdisk9    0022be2a31fa6b48    None
...
hdisk16   0022be2a31fa6b48    None
hdisk17   none                None
vpath0    none                None
vpath1    none                None
vpath2    none                None
vpath3    none                None
vpath4    none                None
vpath5    none                None
vpath6    none                None

To make these disks available to all the cluster nodes, type:

{node1:root}/-> chdev -l vpath5 -a pv=yes
{node1:root}/-> lspv | grep vpath5
vpath5    0022be2a31fa6b48    None

The columns show the disk name, the PVID, and the volume group.
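To assign a PVID to every vpath in a single pass, a small loop such as the following sketch can be used; it builds the device list from the same lsdev -Cs dpo output shown in Example 3-13:

{node1:root}/-> for v in $(lsdev -Cs dpo -F name); do chdev -l $v -a pv=yes; done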

Note: To benefit from the high availability and load balancing features of SDD, use only the vpath devices for further LVM configuration; volume groups should be built on vpaths, not on the underlying hdisks.
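For example, the SDD package supplies LVM wrapper commands such as mkvg4vp that build volume groups directly on vpath devices. The sketch below is purely illustrative; the volume group name datavg, major number 100, and 32 MB partition size are assumptions, not values from the test environment:

{node1:root}/-> mkvg4vp -y datavg -V 100 -s 32 vpath5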

The SDD driver provides a set of commands to manage the virtual path devices:

Example 3-16 Datapath command arguments

datapath query adapter [n]
datapath query device [n]
datapath set adapter <n> online/offline
datapath set device <n> path <m> online/offline
datapath set device <n> policy rr/fo/lb/df
datapath query adaptstats [n]
datapath query devstats [n]
datapath open device <n> path <m>
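For example, following the syntax above, the command below would switch device 0 (vpath0) to the round-robin path-selection policy:

{node1:root}/-> datapath set device 0 policy rr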

To see the correspondence between the hdisks and vpaths, use the following command:

Example 3-17 Datapath query device command

{node1:root}/-> datapath query device

Total Devices : 7

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2105800  SERIAL: 30022513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk4        OPEN     NORMAL    253908    0
  1      fscsi1/hdisk11       OPEN     NORMAL    255351    0

DEV#: 1  DEVICE NAME: vpath1  TYPE: 2105800  SERIAL: 30122513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk5        OPEN     NORMAL    1749      0
  1      fscsi1/hdisk12       OPEN     NORMAL    1661      0

DEV#: 2  DEVICE NAME: vpath2  TYPE: 2105800  SERIAL: 30222513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk6        OPEN     NORMAL    1755      0
  1      fscsi1/hdisk13       OPEN     NORMAL    1773      0

DEV#: 3  DEVICE NAME: vpath3  TYPE: 2105800  SERIAL: 30322513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk7        OPEN     NORMAL    263       0
  1      fscsi1/hdisk14       OPEN     NORMAL    278       0

DEV#: 4  DEVICE NAME: vpath4  TYPE: 2105800  SERIAL: 30422513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk8        OPEN     NORMAL    1696      0
  1      fscsi1/hdisk15       OPEN     NORMAL    1670      0

DEV#: 5  DEVICE NAME: vpath5  TYPE: 2105800  SERIAL: 30522513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk9        CLOSE    NORMAL    1485      0
  1      fscsi1/hdisk16       CLOSE    NORMAL    1455      0

DEV#: 6  DEVICE NAME: vpath6  TYPE: 2105800  SERIAL: 30622513
POLICY: Optimized
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select    Errors
  0      fscsi0/hdisk10       CLOSE    NORMAL    0         0
  1      fscsi1/hdisk17       CLOSE    NORMAL    0         0
