
IBM SAN Multipath installation on Red Hat Enterprise Linux 5.X

This section describes the steps for installing Red Hat Enterprise Linux 5.2 on a multipath device.


1. Determine which multipath device your machine boots from. To do this, note the LUN (Logical Unit Number) of the bootable device that your Fibre Channel adapter card presents during firmware boot.

In the test environment, the firmware displays the boot disk as LUN number 0.

2. Start the installation by providing the keyword mpath on the kernel command line. In the test environment, linux mpath vnc was used.
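At the installer boot prompt, the step above looks like the following (the vnc keyword is only needed if, as in the test environment, you want a graphical install over VNC):

```
boot: linux mpath vnc
```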


The Partitioning screen displays a list of multipath devices as mapper/mpath*.

3. Determine which multipath device corresponds to the bootable device of the host being installed. Go back to the console window if you are using VNC, or start an alternate console. In the test environment, the System x® server is connected to an RSA adapter; in the remote control console, we pressed Ctrl-Alt-F2 to open an alternate console.

In the test environment, we ran the following command in the console to list each multipath device together with its paths:

multipath -ll | grep -E ':|mpath'
In the output of this command, first look for the numbers in x:x:x:x format; the devices listed as x:x:x:0 are the ones that correspond to LUN 0. In the test environment, dm-6, the fourth multipath device listed, had the two paths 0:0:0:0 and 0:0:1:0, making it the bootable multipath device of the host. mapper/mpath4, which corresponds to dm-6, is therefore the bootable multipath device to select for installation.
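As a sketch of that interpretation, the filter below extracts, from multipath -ll style output, the device whose paths end in :0 (LUN 0). The sample lines are hypothetical placeholders (the WWIDs and sd* device names vary per system); on a real host you would pipe multipath -ll straight into the awk filter instead of using the sample variable.

```shell
# Hypothetical sample of the filtered `multipath -ll` output; WWIDs and
# sd* names below are placeholders, not values from the test environment.
sample='mpath4 (36005076801918000a800000000000000) dm-6 IBM,2145
 \_ 0:0:0:0 sda 8:0  [active][ready]
 \_ 0:0:1:0 sde 8:64 [active][ready]
mpath0 (36005076801918000a800000000000001) dm-2 IBM,2145
 \_ 0:0:0:1 sdb 8:16 [active][ready]
 \_ 0:0:1:1 sdf 8:80 [active][ready]'

# Remember the current device name; print it whenever a path line carries a
# SCSI address ending in ":0" (host:channel:target:LUN with LUN = 0).
boot_dev=$(printf '%s\n' "$sample" |
  awk '/^mpath/ {dev=$1} /[0-9]+:[0-9]+:[0-9]+:0 / {print dev}' | sort -u)
echo "$boot_dev"   # mpath4 for this sample
```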




4. Continue the installation by going back to your VNC session, or by switching back to the console window if you are installing in text mode. In the test environment, we pressed Ctrl-Alt-F1 in the remote control console.

5. Select the appropriate multipath device for installation. In the test environment, mapper/mpath4 was the only device checked.





6. Select Review and modify partitioning layout and then press Next.

7. The boot loader might not be set by default to install on the bootable multipath device. In the test environment, the boot loader defaulted to /dev/mapper/mpath0, as shown on the following screen. Note the first line of the screen:

The GRUB boot loader will be installed on /dev/mapper/mpath0.
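One way to double-check the boot loader target after installation is GRUB's device map; if the loader went to the wrong device, it shows up here. The content below is illustrative only and assumes the test environment's layout, with mpath4 as the boot device:

```
# /boot/grub/device.map on a correctly installed multipath system (illustrative)
(hd0)   /dev/mapper/mpath4
```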


8. If the installer did not select the correct bootable multipath device, change where the boot loader is installed by selecting Configure advanced boot loader options and pressing Next.
Repeat this step until the bootable multipath device is at the top of the list.

11. Press OK.

12. The bootable device is now selected for the boot loader installation. In the test environment, note that /dev/mapper/mpath4 is selected.
 
13. Continue and complete the installation.


Note: If you are using a different IBM®-supported multipath storage system, additional steps are needed to finish the installation. See Additional multipath configuration information for IBM storage for more details.

14. Reboot the system.

15. Verify that the firmware boot sequence is properly configured to boot from the correct multipath device.

In the test environment, we pressed F1 at the BIOS screen (where the IBM eServer logo appears) to enter the Configuration/Setup menu, chose Start Options and then Startup Sequence Options, and changed the first startup device (Hard Disk 0) to boot from the multipath device (mpath4).



16. Follow these steps to verify that the installation was successful:

a. Run df and cat /proc/swaps to verify that the correct partitions are on a multipath device. In the test environment, the root, /boot, and swap partitions are displayed as /dev/mapper/mpath4p3, /dev/mapper/mpath4p1, and /dev/mapper/mpath4p2 respectively. These partitions are correctly installed to use the Device Mapper (DM) Multipath feature of Linux®.
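The check in step a can be scripted. The sketch below runs against hypothetical sample lines shaped like df -P and /proc/swaps output (sizes are placeholders); on the installed system you would feed it the real command output instead.

```shell
# Hypothetical df -P / /boot and /proc/swaps lines; sizes are placeholders.
layout='/dev/mapper/mpath4p3 20642428 3104576 16489452 16% /
/dev/mapper/mpath4p1 101086 12056 83811 13% /boot
/dev/mapper/mpath4p2 partition 2096472 0 -1'

# Count how many of the three entries sit on an mpath4 partition; all three
# (root, /boot, swap) must match for the install to be using DM Multipath.
matches=$(printf '%s\n' "$layout" | grep -c '^/dev/mapper/mpath4p')
echo "$matches"   # 3
```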

b. Run multipath -ll to verify that the multipath device has as many paths as were configured. In the test environment, two paths are displayed, confirming that all paths are properly configured.

c. If you installed on LVM, run dmsetup ls and dmsetup table to verify that the LVM volumes are created as linear devices on top of the chosen multipath device.
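For step c, each dmsetup table line has the shape name: start length target args; an LVM volume sitting on the multipath device shows a linear target whose backing device is the multipath map. The line below is illustrative only (the volume name, sizes, offset, and the 253:6 major:minor number are placeholders, not values from the test environment):

```
VolGroup00-LogVol00: 0 41943040 linear 253:6 384
```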
 
