Multipathing on Opensolaris 2009.06-SPARC with IPStor Disks

From the GWDG Wiki



This short document describes how to get the OpenSolaris (2009.06) multipathing software
to work with FalconStor IPStor disks.
The test and implementation were done on a Sun Fire V440 with 5 IPStor disks,
presented via QLogic cards and the qlc driver from OpenSolaris (2009.06).

Read the documentation

The default steps for enabling and configuring multipathing are described here:

Check if multipath software is already enabled

To do that, use


If you see disks like this:


and you see twice the number of disks that are physically connected, you do not have multipathing enabled.
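A quick way to sanity-check this is to count the disk entries and compare the count against the number of physically attached LUNs. The following is only a sketch (the file name, the sample entries, and the LUN count are assumptions, not from the original):

```shell
# Sketch: count disk entries in a saved copy of the `format` disk listing.
# With multipathing disabled, each LUN shows up once per path, so e.g.
# 2 physical IPStor LUNs reached over 2 paths appear as 4 entries.
cat > /tmp/format-disks.txt <<'EOF'
0. c2t0d0 <FALCON-IPSTOR DISK-v1.0>
1. c2t1d0 <FALCON-IPSTOR DISK-v1.0>
2. c3t0d0 <FALCON-IPSTOR DISK-v1.0>
3. c3t1d0 <FALCON-IPSTOR DISK-v1.0>
EOF

seen=$(grep -c 'IPSTOR DISK' /tmp/format-disks.txt)
physical=2   # assumption: 2 physical LUNs, 2 paths each
if [ "$seen" -eq $((2 * physical)) ]; then
    echo "every LUN visible twice: multipathing is NOT enabled"
fi
```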

Get multipath software

You will need the mpathadm package:

pkg search -r mpathadm


pkg install SUNWmpathadm

Storage Server

If you plan to make a "storage-server" out of this box:

pkg search -r storage-server


pkg install storage-server

This pulls in additional packages, such as iSCSI support.

Enable multipathing on SPARC

Problem: On SPARC, multipathing is not enabled by default.
On x86 it already is.
To enable multipathing, do the following:

stmsboot -e
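Behind the scenes, stmsboot -e enables MPxIO by switching the mpxio-disable property of the fp driver. After running it, /kernel/drv/fp.conf should contain a line like the following (shown for reference; verify this on your own system):

```
mpxio-disable="no";
```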

Configure "Third Party Disks"

As described in the documentation above, you have to configure your disks after enabling multipathing.

Edit the scsi_vhci.conf

vi /kernel/drv/scsi_vhci.conf

Add the following at the end of the file, before the closing section:

Configuration for the IPStor Disks

# GWDG IPStor Config
scsi-vhci-failover-override =
       "FALCON  IPSTOR DISK",          "f_sym";


Important is the fact that the vendor ID must be exactly 8 characters long (here: "FALCON" plus two trailing blanks).
If the vendor name is shorter than 8 characters, you have to pad the rest with blanks.
It is followed directly by the product ID, which can be up to 16 characters long.
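The 8-plus-16 character layout can be illustrated with a small printf sketch (not part of the original; printf's %-Ns left-justifies and pads with blanks):

```shell
# Pad the vendor ID to 8 characters and the product ID to 16,
# as required by scsi-vhci-failover-override.
entry=$(printf '%-8s%-16s' "FALCON" "IPSTOR DISK")
echo "\"$entry\""            # "FALCON  IPSTOR DISK     "
echo "${#entry} characters"  # 24 characters
```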

Configuration for the HBA in use with IPStor

You have to change some values for the HBA if you use IPStor disks via FC.
In this example, QLogic cards are in use.
The vendor FalconStor can provide the values for other HBA vendors.

Get info about the HBA

fcinfo hba-port -i

This returns all the needed information about the HBAs on your system.

Also this command gives you details about your HBA:

mpathadm list initiator-port

If multipathing is not enabled, you will only get information like this:

Initiator Port:,4000002a00ff

Otherwise, you will also see the WWNs of each HBA, as reported by fcinfo.

Configure the HBA for multipathing with IPStor

First, make a copy of the original file:

cd /kernel/drv
cp -rp qlc.conf qlc.conf.original

Then change these values as follows:

HBA values for QLA cards

Original value                       Value for multipathing with IPStor
execution-throttle=32                execution-throttle=16
enable-target-reset-on-bus-reset=0   enable-target-reset-on-bus-reset=1
link-down-timeout=0                  link-down-timeout=30
extended-logging=0                   extended-logging=1
login-retry-count=4                  login-retry-count=255
port-down-retry-count=8              port-down-retry-count=255
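Editing qlc.conf by hand works fine; the same changes can also be scripted, for example with sed. This is only a sketch operating on a sample file (the sample file path and the exact parameter lines are assumptions derived from the table above; on the real system the target would be /kernel/drv/qlc.conf, after saving qlc.conf.original):

```shell
# Sketch: apply the IPStor values from the table to a sample qlc.conf.
cat > /tmp/qlc.conf <<'EOF'
execution-throttle=32;
enable-target-reset-on-bus-reset=0;
link-down-timeout=0;
extended-logging=0;
login-retry-count=4;
port-down-retry-count=8;
EOF

sed -e 's/^execution-throttle=32/execution-throttle=16/' \
    -e 's/^enable-target-reset-on-bus-reset=0/enable-target-reset-on-bus-reset=1/' \
    -e 's/^link-down-timeout=0/link-down-timeout=30/' \
    -e 's/^extended-logging=0/extended-logging=1/' \
    -e 's/^login-retry-count=4/login-retry-count=255/' \
    -e 's/^port-down-retry-count=8/port-down-retry-count=255/' \
    /tmp/qlc.conf > /tmp/qlc.conf.new

grep 'execution-throttle' /tmp/qlc.conf.new   # execution-throttle=16;
```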

Reboot with stmsboot

After these changes, you have to reboot with the stmsboot update command
so that they take effect:

stmsboot -u

Answer the questions; the system will then reboot.

How to see if multipathing works

Once you have rebooted the system after these changes, you can use the following to see if it works:

mpathadm list lu
               Total Path Count: 2
               Operational Path Count: 2
               Total Path Count: 2
               Operational Path Count: 2
               Total Path Count: 2
               Operational Path Count: 2
               Total Path Count: 2
               Operational Path Count: 2
               Total Path Count: 2
               Operational Path Count: 2
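The mpathadm output above can also be checked mechanically. The following awk sketch (not from the original; the file name and sample data are assumptions) flags any LUN whose operational path count is below its total path count, run here against a saved copy of the output:

```shell
# Sketch: compare "Operational Path Count" against "Total Path Count"
# in saved `mpathadm list lu` output.
cat > /tmp/mpathadm.out <<'EOF'
                Total Path Count: 2
                Operational Path Count: 2
                Total Path Count: 2
                Operational Path Count: 1
EOF

awk -F': ' '
    /Total Path Count/       { total = $2 }
    /Operational Path Count/ {
        if ($2 < total) { print "degraded: " $2 "/" total " paths"; bad++ }
    }
    END { if (!bad) print "all paths operational" }
' /tmp/mpathadm.out   # degraded: 1/2 paths
```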

Or use format

      0. c0t6000D77E00005C467B2567F8DC3F7C8Ad0 <FALCON-IPSTOR DISK-v1.0>
      1. c0t6000D77E0000605F7B2567F8D36B4974d0 <FALCON-IPSTOR DISK-v1.0>
      2. c0t6000D77E0000634B7B2567F8CFF9220Fd0 <FALCON-IPSTOR DISK-v1.0>
      3. c0t6000D77E0000691E7B2567F8CC726627d0 <FALCON-IPSTOR DISK-v1.0>
      4. c0t6000D77E000062437B2567F8D7FC8195d0 <FALCON-IPSTOR DISK-v1.0>

Create a new zpool as raidz1 with multipathing devices

To create a new pool on the new devices do the following:

zpool create testpool1 raidz c0t6000D77E00005C467B2567F8DC3F7C8Ad0 c0t6000D77E0000605F7B2567F8D36B4974d0 c0t6000D77E0000634B7B2567F8CFF9220Fd0 c0t6000D77E0000691E7B2567F8CC726627d0 c0t6000D77E000062437B2567F8D7FC8195d0

To show the pool status:


zpool status
 pool: testpool1
state: ONLINE
scrub: none requested

       NAME                                       STATE     READ WRITE CKSUM
       testpool1                                  ONLINE       0     0     0
         raidz1                                   ONLINE       0     0     0
           c0t6000D77E0000691E7B2567F8CC726627d0  ONLINE       0     0     0
           c0t6000D77E0000634B7B2567F8CFF9220Fd0  ONLINE       0     0     0
           c0t6000D77E0000605F7B2567F8D36B4974d0  ONLINE       0     0     0
           c0t6000D77E000062437B2567F8D7FC8195d0  ONLINE       0     0     0
           c0t6000D77E00005C467B2567F8DC3F7C8Ad0  ONLINE       0     0     0


