HP Scalable File Share User's Guide G3.1-0
HP Part Number: SFSUGG31-E
Published: June 2009
Edition: 5
{} The contents are required in syntax. If the contents are a list separated by |, you must choose one of the items... The preceding element can be re
For SFS Gen 3 Cabling Tables, see: http://docs.hp.com/en/storage.html and click the Scalable File Share (SFS) link.
For SFS V2.3 Release Notes, see: HP S
1 What's In This Version

1.1 About This Product

HP SFS G3.1-0 uses the Lustre File System on MSA2000fc hardware to provide a storage system for stan
Table 1-1 Supported Configurations (continued)

Component                      Supported
Storage Array Drives           SAS, SATA
ProLiant Support Pack (PSP)    8.10 and later

1 CentOS 5.2 is
Figure 1-1 Platform Overview

1.3 Supported Configurations
Figure 1-2 Server Pairs

Figure 1-2 shows typical wiring for server pairs.

1.3.1.1 Fibre Channel Switch Zoning

If your Fibre Channel is configured with a
so will limit or eliminate user access to the servers, thereby reducing potential security threats and the need to apply security updates. For informat
2 Installing and Configuring MSA Arrays

This chapter summarizes the installation and configuration steps for MSA2000fc arrays used in HP SFS G3.1-0 syst
© Copyright 2009 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use or copyin
2.3.2 Creating New Volumes

To create new volumes on a set of MSA2000 arrays, follow these steps:
1. Power on all the MSA2000 shelves.
2. Define an alias.
• MSA2212fc Controller

Disks are identified by SCSI ID. The first enclosure has disk IDs 0-11, the second has 16-27, the third has 32-43, and the fourth
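The ID layout above follows a simple stride: each enclosure's first disk ID advances by 16, with 12 disks per shelf. The short loop below is an illustrative sketch of that pattern, derived only from the ranges quoted above:

```shell
# Print the SCSI ID range for each enclosure, assuming IDs advance by 16
# per enclosure with 12 disks each (matches the ranges given in the text)
for enc in 1 2 3 4; do
  start=$(( (enc - 1) * 16 ))
  end=$(( start + 11 ))
  echo "Enclosure $enc: disk IDs $start-$end"
done
# → Enclosure 2: disk IDs 16-27
```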
a. Create vdisks in the MGS and MDS array. The following example assumes the MGS and MDS do not have attached disk enclosures and creates one vdisk for
3 Installing and Configuring HP SFS Software on Server Nodes

This chapter provides information about installing and configuring HP SFS G3.1-0 software o
3.1 Supported Firmware

Follow the instructions in the documentation that was included with each hardware component to ensure that you are running the l
3.2 Installation Requirements

A set of HP SFS G3.1-0 file system server nodes should be installed and connected by HP in accordance with the HP SFS G3.1
## Template ADD
network --bootproto static --device %{prep_ext_nic} \
--ip %{prep_ext_ip} --netmask %{prep_ext_net} --gateway %{prep_ext_gw} \
--hostna
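For illustration, the template line above might expand as follows once the %{prep_*} variables are substituted. The device name, addresses, and hostname here are placeholders, not values from the guide:

```
# Hypothetical expansion of the Kickstart network directive (placeholder values)
network --bootproto static --device eth1 \
--ip 192.168.1.10 --netmask 255.255.255.0 --gateway 192.168.1.1 \
--hostname sfs-server1
```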
Please insert the HP SFS G3.1-0 DVD and enter any key to continue:

After you insert the HP SFS G3.1-0 DVD and press Enter, the Kickstart installs the H
3.3.3 Network Installation Procedure

As an alternative to the DVD installation described above, some experienced users may choose to install the softwar
3.4.1 Patch Download and Installation Procedure

To download and install HP SFS patches from the ITRC website, follow this procedure:
1. Create a tempora
Description: Node Port1 Port2 Sys image
GUIDs: 001a4bffff0cd124 001a4bffff0cd125 001a4bffff0cd126 001a
3.5.3 Configuring pdsh

The pdsh command enables parallel shell commands to be run across the file system cluster. The pdsh RPMs are installed by the HP
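As an illustration of what pdsh provides, a parallel invocation such as `pdsh -w node[1-4] uptime` fans a single command out to several hosts at once. The loop below sketches the equivalent serial behavior with placeholder hostnames; echo stands in for the remote call, since no cluster is assumed here:

```shell
# Serial equivalent of a pdsh fan-out, for illustration only; with pdsh
# installed the one-liner is:  pdsh -w node[1-4] uptime
for host in node1 node2 node3 node4; do
  echo "$host: would run 'uptime' here"   # real version: ssh "$host" uptime
done
```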
3.5.6 Verifying Digital Signatures (optional)

Verifying digital signatures is an optional procedure for customers to verify that the contents of the ISO
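The general pattern behind such verification can be shown with a self-contained checksum example. The file and names below are fabricated for illustration only; the actual procedure verifies the signature files shipped with the ISO:

```shell
# Self-contained illustration of checksum-based verification; the real
# procedure checks the signatures shipped with the ISO, not this mock file
echo "example contents" > /tmp/example.iso
md5sum /tmp/example.iso > /tmp/example.md5
md5sum -c /tmp/example.md5    # prints: /tmp/example.iso: OK
```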
IMPORTANT: All existing file system data must be backed up before attempting an upgrade. HP is not responsible for the loss of any file system data dur
• /etc/modprobe.conf
• /etc/ntp.conf
• /etc/resolv.conf
• /etc/sysconfig/network
• /etc/sysconfig/network-scripts/ifcfg-ib0
• /etc/sysconfig/network-script
10. Edit the newly created cib.xml files for each failover pair and increase the value of epoch_admin to be 1 larger than the value listed in the activ
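This edit can also be scripted. The sketch below bumps an epoch_admin attribute with sed; the attribute name is taken from the step above, but the XML fragment and file path are fabricated for illustration:

```shell
# Bump epoch_admin by 1 in a mock cib.xml fragment (illustrative only)
printf '<cib epoch_admin="4">\n' > /tmp/cib-demo.xml
old=$(sed -n 's/.*epoch_admin="\([0-9]*\)".*/\1/p' /tmp/cib-demo.xml)
sed -i "s/epoch_admin=\"$old\"/epoch_admin=\"$((old + 1))\"/" /tmp/cib-demo.xml
grep epoch_admin /tmp/cib-demo.xml    # now shows epoch_admin="5"
```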
4 Installing and Configuring HP SFS Software on Client Nodes

This chapter provides information about installing and configuring HP SFS G3.1-0 software o
Configure the selected Ethernet interface with an IP address that can access the HP SFS G3.1-0 server using one of the methods described in “Configurin
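One common way to assign such an address on a RHEL/CentOS-based client is a static ifcfg file. The interface name and addresses below are placeholders for illustration, not values from the guide:

```
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth1 (placeholder values)
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.31.80.100
NETMASK=255.255.0.0
ONBOOT=yes
```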
7. Repeat steps 1 through 6 for additional client nodes, using the appropriate node replication or installation tools available on your client cluster.
1. Install the Lustre source RPM provided in the HP SFS G3.1-0 software tarball in the /opt/hp/sfs/SRPMS directory. Enter the following command on o
5 Using HP SFS Software

This chapter provides information about creating, configuring, and using the file system.

5.1 Creating a Lustre File System

The f
To see the multipath configuration, use the following command. Output will be similar to the example shown below:
# multipath -ll
mpath7 (3600c0ff000d547
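When scripting against this output, the mpath device names in the first column are usually what you need. The sketch below extracts them from a canned sample line, since real output varies per system; the WWID shown is a placeholder, not one from the guide:

```shell
# Extract the multipath device name from a mock `multipath -ll` header line
# (the WWID in parentheses is a placeholder)
sample='mpath7 (3600c0ff0placeholder0wwid) dm-2 HP,MSA2212fc'
dev=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "$dev"    # prints: mpath7
```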
node3,options lnet networks=o2ib0,/dev/mapper/mpath6,/mnt/ost4,ost,testfs,icnode1@o2ib0:icnode2@o2ib0,,,,"_netdev,noauto",icnode4@o2ib0
2. Start the file system manually and test for proper operation before configuring Heartbeat to start the file system. Mount the MGS mount-point on the
implementation sends these messages using IP multicast. Each failover pair uses a different IP multicast group. When a node determines that its partner
# gen_hb_config_files.pl -i ilos.csv -v -e -x testfs.csv
Descriptions are included here for reference, and so that the files can be generated by hand if necessary
node6 Filesystem::/dev/mapper/mpath8::/mnt/ost13::lustre
node6 Filesystem::/dev/mapper/mpath9::/mnt/ost14::lustre
node6 Filesystem::/dev/mapper/mpath1
5.2.5 Starting Heartbeat

IMPORTANT: You must start the Lustre file system manually in the following order: MGS, MDT, OST, and verify proper file system
The destination host name is optional, but if it is not specified, crm_resource forces the resource to move by creating a r
4. Start the Heartbeat service on the remaining OSS nodes:
# pdsh -w oss[1-n] service heartbeat start
5. After the file system has started, HP recommend
Use the following command to show the Lustre network connections that the node is aware of, some of which might not be currently active.
# cat /proc/sys
# debugfs -c -R 'dump CONFIGS/testfs-client /tmp/testfs-client' /dev/mapper/mpath0
debugfs 1.40.7.sun3 (28-Feb-2008)
/dev/mapper/mpath0: cata
c. To prevent the file system components and the Heartbeat service from automatically starting on boot, enter the following command:
# pdsh -a chkconfig
# lfs check mds
testfs-MDT0000-mdc-ffff81012833ec00 active
Use the following command to check OSTs or servers for both MDS and OSTs. This will show the L
6 Licensing

A valid license is required for normal operation of HP SFS G3.1-0. HP SFS G3.1-0 systems are preconfigured with the correct license file at
1. Stop Heartbeat on the MGS and the MDS.
2. Copy the license file into /var/flexlm/license.lic on the MGS and the MDS.
3. Run the following command on
7 Known Issues and Workarounds

The following items are known issues and workarounds.

7.1 Server Reboot

After the server reboots, it checks the file syste
NOTE: Use the appropriate device in place of /dev/mapper/mpath?
b. For example, if the --dryrun command returned:
Parameters: mgsnode=172.31.80.1@o2ib m
A HP SFS G3 Performance

A.1 Benchmark Platform

Performance data in this appendix is based on HP SFS G3.0-0. Performance analysis of HP SFS G3.1-0 is not
The Lustre servers were DL380 G5s with two quad-core processors and 16 GB of memory, running RHEL v5.1. These servers were configured in failover pairs
Figure A-3 shows single-stream performance for a single process writing and reading a single 8 GB file. The file was written in a directory with a stri
The test shown in Figure A-5 did not use direct I/O. Nevertheless, it shows the cost of client cache management on throughput. In this test, two proces
Figure A-6 Multi-Client Throughput Scaling

In general, Lustre scales quite well with additional OSS servers if the workload is evenly distributed over t
A.4 One Shared File

Frequently in HPC clusters, a number of clients share one file either for read or for write. For example, each of N clients could wr
Another way to measure throughput is to average only over the time while all the clients are active. This is represented by the taller, narrower box in
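The two averaging conventions can be made concrete with a small arithmetic sketch. All figures below are made up for illustration, not measurements from this appendix; the point is only why the all-active ("stonewalled") average comes out higher than the wall-clock average:

```shell
# Aggregate throughput two ways (all figures hypothetical): wall-clock
# averages over the slowest client's total runtime, while the stonewalled
# figure averages over the shorter window when all clients were active
total_mb=$(( 4 * 8192 ))    # 4 clients x 8 GiB each, in MiB
wall_secs=40                # slowest client finished at t=40s
active_secs=30              # all 4 clients were simultaneously active for 30s
echo "wall-clock:  $(( total_mb / wall_secs )) MiB/s"
echo "stonewalled: $(( total_mb / active_secs )) MiB/s"
```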
For workloads that require a lot of disk head movement relative to the amount of data moved, SAS disk drives provide a significant performance benefit.
About This Document

This document provides installation and configuration information for HP Scalable File Share (SFS) G3.1-0. Overviews of installing a