
Storage

Our installation is going to need storage. On the one hand, space is needed for the virtual machines and the management around them. On the other hand, space is needed for the database storage.
The two storage needs have different characteristics: for the VMs it is important that the images are easily accessible, while for the database storage maximum performance is what matters.

NFS

We will set up Harry as the central NFS server. For the VM hosts it exports the /OVS filesystem, which holds all templates and images of the virtual machines. This has advantages and disadvantages: on the one hand all images are kept in one central place, which makes it possible, for example, to migrate VMs within a server pool from one VM host to another; on the other hand we do not use the (fast) local storage that is present on every node.

NFS server - Harry

To start with, we create a filesystem of (initially) 15 GB on the server.

lvcreate -L15G -n ovslv datavg
jfs_mkfs -L 20120405 /dev/datavg/ovslv
mkdir -p /srv/oracle/ovs
echo "/dev/datavg/ovslv  /srv/oracle/ovs  jfs  defaults,noatime 1 2" >> /etc/fstab
mount /srv/oracle/ovs
groupadd -g 501 oracle
chgrp oracle /srv/oracle/ovs
chmod 770 /srv/oracle/ovs

  1. Create a logical volume "ovslv" of 15 GB in the volume group "datavg"
  2. Create a JFS filesystem on the LV
  3. Create a mount point to mount the LV on
  4. Add the configuration for this mount point to /etc/fstab
  5. Mount the filesystem
  6. Create a group "oracle"
  7. Make the new mount point owned by the group oracle
  8. Nobody else is allowed to write to it
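
As a quick sanity check (a sketch; the exact output depends on your disks), df should now show a roughly 15 GB JFS filesystem and ls should show the group and permissions that were just set:

df -hT /srv/oracle/ovs     # expect type jfs, size around 15G
ls -ld /srv/oracle/ovs     # expect drwxrwx--- root oracle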

This filesystem will soon need to be bigger. Here is how to extend it by 40 GB to a total size of 55 GB:

lvextend -L+40G datavg/ovslv
mount -o remount,resize /srv/oracle/ovs

Export this filesystem over NFS to all VM hosts. In this case only to Humpty.

echo "/srv/oracle/ovs  humpty(rw,sync,subtree_check,root_squash,anonuid=501,anongid=501)" >> /etc/exports
chmod +x /etc/rc.d/rc.nfsd
/etc/rc.d/rc.nfsd start

  1. Configure an NFS export.
    • Read + write
    • No write caching
    • Check the subtree on renames (default)
    • Remap requests from root@humpty to a local anonymous user
    • The user id of the anonymous user is 501
    • The group id of the anonymous user is 501
  2. Configure NFS to be started when the server boots (the server runs Slackware: this is the traditional way)
  3. Start the NFS server
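
To verify that the export is actually visible, assuming the usual nfs-utils client tools are installed on both machines, something like this can be used:

exportfs -v          # on harry: list the active exports and their options
showmount -e harry   # on humpty: ask harry which filesystems it exports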

NFS client - Humpty

On Humpty, mount the filesystem exported by Harry.

rm -rf /OVS/*
echo "harry:/srv/oracle/ovs /OVS  nfs  defaults,noauto,noatime  1 3" >> /etc/fstab
mount /OVS
mkdir /OVS/Repositories
mkdir /OVS/running_pool

  1. Remove the contents of the existing (empty) /OVS
  2. Add the configuration of the filesystem to /etc/fstab
  3. Mount the filesystem
  4. Restore the empty directory that was removed
  5. Create the directory for active VM images

[todo: the owner of the empty directory is now 501:501. Can this still be fixed to 0:0, even though it is 501:501 on the NFS server?]
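
To see what the client actually ends up with (a sketch; -n shows the numeric ids, so the 501:501 mapping from the todo above is visible):

df -hT /OVS      # should show harry:/srv/oracle/ovs, type nfs
ls -ldn /OVS     # shows the numeric uid:gid of the mount point as seen by the client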

SAN

[image: SAN multipath]

A RAC database consists of a number of database nodes that all have access to the same storage. A Storage Area Network (SAN) creates a network where storage is created on a dedicated storage server and made accessible to all nodes through a dedicated, fast network.
Such a network usually consists of expensive, fast, glass fiber interconnects using minimal protocols (fibrechannel, infiniband) but can also be implemented using relatively cheap networking gear using 'simple' TCP/IP.
In my project I will be using a redundant Gbit network that is also used for all other network traffic (so it is not dedicated).

The SAN server in my project, the block marked "SAN serv" in the image on the right, is a normal computer that offers storage through the iSCSI protocol. In iSCSI terms, it is the "iSCSI target". Choice and installation of the iSCSI target software, as well as the kernel configuration, can be found here.

The RAC nodes, the "VM"s in the top of the image, have to know how to connect to and use this storage. They are the "iSCSI Initiators". See this page for installation of this software and the correct kernel configuration.

The server and clients are connected through the SAN. If you have enough wiring and switches, it is possible to use more than one switch to make a redundant connection between the SAN server and the clients. This ensures that the server and clients stay connected even when a cable or a switch breaks or is disconnected. The technique used to maintain such a redundant connection is called "Multipathing" and is described here.

iSCSI target - Harry

On Harry, three partitions are designated as storage for our RAC databases:
sda5  75 GB
sdb2  75 GB
sdc2  75 GB

Each partition is placed relatively close to the outer edge of its disk, which will give the highest throughput later on. Not that it really matters on a 1 Gbit network, but again, it's the idea that counts.
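
If you want to see the effect for yourself, a rough sequential-read test per partition is enough (a sketch using the device names above; run it while the partitions are not otherwise in use, and expect the numbers to vary per disk):

for dev in /dev/sda5 /dev/sdb2 /dev/sdc2; do
  # read 1 GB straight from the partition, bypassing the page cache
  dd if=$dev of=/dev/null bs=1M count=1024 iflag=direct
done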

Using targetcli, we configure this as 3 LUNs in tpg1, accessible for all virtual machines (see the iSCSI target example for more explanation).

First, tell targetcli which block devices it should know about:
$ targetcli
targetcli GIT_VERSION (rtslib GIT_VERSION)
Copyright (c) 2011 by RisingTide Systems LLC.
All rights reserved.
/> ls
o- / ............................................................................. [...]
  o- backstores .................................................................. [...]
  | o- fileio ....................................................... [0 Storage Object]
  | o- iblock ....................................................... [0 Storage Object]
  | o- pscsi ........................................................ [0 Storage Object]
  | o- rd_dr ........................................................ [0 Storage Object]
  | o- rd_mcp ....................................................... [0 Storage Object]
  o- iscsi .................................................................. [0 Target]
  o- loopback ............................................................... [0 Target]

/> backstores/iblock create asm1 /dev/sda5
Generating a wwn serial.
Created iblock storage object asm1 using /dev/sda5.
Entering new node /backstores/iblock/asm1

/backstores/iblock/asm1> /backstores/iblock create asm2 /dev/sdb2
Generating a wwn serial.
Created iblock storage object asm2 using /dev/sdb2.
Entering new node /backstores/iblock/asm2

/backstores/iblock/asm2> /backstores/iblock create asm3 /dev/sdc2
Generating a wwn serial.
Created iblock storage object asm3 using /dev/sdc2.
Entering new node /backstores/iblock/asm3

/backstores/iblock/asm3> ls ..
o- iblock .......................................................... [3 Storage Objects]
  o- asm1 ...................................................... [/dev/sda5 deactivated]
  o- asm2 ...................................................... [/dev/sdb2 deactivated]
  o- asm3 ...................................................... [/dev/sdc2 deactivated]

Activate the iSCSI part. This creates a target portal group, tpgt1. Disable authentication for this portal group, then create a target portal (i.e. bind a network address to this portal group).

/backstores/iblock/asm3> /iscsi create
Created target iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f.
Selected TPG Tag 1.
Successfully created TPG 1.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1

/iscsi/iqn.20...tpgt1/> set parameter AuthMethod=None
Parameter AuthMethod is now 'None'.
/iscsi/iqn.20...tpgt1/> set attribute authentication=0
Parameter authentication is now '0'.
/iscsi/iqn.20...tpgt1/> portals/ create
Using default IP port 3260
Automatically selected IP address 10.0.0.148.
Successfully created network portal 10.0.0.148:3260.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/portals/10.0.0.148:3260

Oops, the storage network was supposed to be the 10.0.3.0/24 network. This portal is now bound to the 10.0.0.148 address (the eth0 address). Add the correct portals and delete this one.

/iscsi/iqn.20....0.0.148:3260> cd ../..
/iscsi/iqn.20...tpgt1/portals> create 10.0.3.1
Using default IP port 3260
Successfully created network portal 10.0.3.1:3260.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/portals/10.0.3.1:3260

/iscsi/iqn.20...10.0.3.1:3260> cd ..
/iscsi/iqn.20...tpgt1/portals> create 10.0.3.2
Using default IP port 3260
Successfully created network portal 10.0.3.2:3260.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/portals/10.0.3.2:3260

/iscsi/iqn.20...10.0.3.2:3260> cd ..
/iscsi/iqn.20...tpgt1/portals> delete 10.0.0.148 3260
Deleted network portal 10.0.0.148:3260

Assign our three drives to this target portal group:

/iscsi/iqn.20....0.0.148:3260> cd ../..
/iscsi/iqn.20...4636f5f/tpgt1> luns/ create /backstores/iblock/asm1
Selected LUN 0.
Successfully created LUN 0.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/luns/lun0
/iscsi/iqn.20...gt1/luns/lun0> ../../luns/ create backstores/iblock/asm2
Selected LUN 1.
Successfully created LUN 1.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/luns/lun1
/iscsi/iqn.20...gt1/luns/lun1> ../../luns/ create backstores/iblock/asm3
Selected LUN 2.
Successfully created LUN 2.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/luns/lun2

All three LUNs are made available to clients with the initiator name "iqn.2012-08.nl.rac11":

/iscsi/iqn.20...gt1/luns/lun2> cd ../..
/iscsi/iqn.20...4636f5f/tpgt1> acls/ create iqn.2012-08.nl.rac11:0010010010
Successfully created Node ACL for iqn.2012-08.nl.rac11:0010010010
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f/tpgt1/acls/iqn.2012-08.nl.rac11:0010010010
/iscsi/iqn.20...11:0010010010>
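
For this ACL to do anything, the clients have to present exactly that initiator name. With open-iscsi that is normally configured in /etc/iscsi/initiatorname.iscsi (shown here as a sketch; installation of the client software is covered on its own page):

# /etc/iscsi/initiatorname.iscsi on the RAC node
InitiatorName=iqn.2012-08.nl.rac11:0010010010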

Get an overview of what was created and then save the configuration:

/iscsi/iqn.20...11:0010010010> cd /
/> ls 
o- / ............................................................................ [...]
  o- backstores ................................................................. [...]
  | o- fileio ...................................................... [0 Storage Object]
  | o- iblock ..................................................... [3 Storage Objects]
  | | o- asm1 ................................................... [/dev/sda5 activated]
  | | o- asm2 ................................................... [/dev/sdb2 activated]
  | | o- asm3 ................................................... [/dev/sdc2 activated]
  | o- pscsi ....................................................... [0 Storage Object]
  | o- rd_dr ....................................................... [0 Storage Object]
  | o- rd_mcp ...................................................... [0 Storage Object]
  o- iscsi ................................................................. [1 Target]
  | o- iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f ................ [1 TPG]
  |   o- tpgt1 .............................................................. [enabled]
  |     o- acls ............................................................... [1 ACL]
  |     | o- iqn.2012-08.nl.rac11:0010010010 .......................... [3 Mapped LUNs]
  |     |   o- mapped_lun0 ................................................ [lun0 (rw)]
  |     |   o- mapped_lun1 ................................................ [lun1 (rw)]
  |     |   o- mapped_lun2 ................................................ [lun2 (rw)]
  |     o- luns .............................................................. [3 LUNs]
  |     | o- lun0 ........................................... [iblock/asm1 (/dev/sda5)]
  |     | o- lun1 ........................................... [iblock/asm2 (/dev/sdb2)]
  |     | o- lun2 ........................................... [iblock/asm3 (/dev/sdc2)]
  |     o- portals ........................................................ [2 Portals]
  |       o- 10.0.3.1:3260 ....................................................... [OK]
  |       o- 10.0.3.2:3260 ....................................................... [OK]
  o- loopback .............................................................. [0 Target]
/> saveconfig
WARNING: Saving harry current configuration to disk will overwrite your boot settings.
The current target configuration will become the default boot config.
Are you sure? Type 'yes': yes
Making backup of loopback/ConfigFS with timestamp: 2012-08-02_21:11:21.378676
Successfully updated default config /etc/target/loopback_start.sh
Making backup of LIO-Target/ConfigFS with timestamp: 2012-08-02_21:11:21.378676
Generated LIO-Target config: /etc/target/backup/lio_backup-2012-08-02_21:11:21.378676.sh
Making backup of Target_Core_Mod/ConfigFS with timestamp: 2012-08-02_21:11:21.378676
Generated Target_Core_Mod config: /etc/target/backup/tcm_backup-2012-08-02_21:11:21.378676.sh
Successfully updated default config /etc/target/lio_start.sh
Successfully updated default config /etc/target/tcm_start.sh
/> 
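
With the configuration saved, a quick check from one of the clients shows whether the target answers on the storage network (a sketch, assuming open-iscsi is already installed there; the client setup itself is described on its own page):

iscsiadm -m discovery -t sendtargets -p 10.0.3.1:3260
# should list iqn.2003-01.org.linux-iscsi.harry.x8664:sn.b69d24636f5f behind both 10.0.3.1:3260 and 10.0.3.2:3260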

iSCSI client - VM1

