dCache installation on the Sun X4540 Thors

Description

Adding storage to dCache using Sun's Thor X4540 servers

Thor cabling

                               THOR1    THOR2    THOR3
Thor port Net0 (Eth1), cable   cx7.49   cx7.50   cx7.51
Sprace switch port             25       26       27
  cable                        cx6.41   cx6.42   cx6.43
Thor port NetMng, cable        cx7.52   cx7.53   cx7.54
GridUnesp Mng switch port      15       16       17
  cable                        cx1.73   cx1.74   cx1.75
Thor port Net1 (Eth2), cable   cx1.05   cx1.06   cx1.07
Sprace switch port             7        8        9

IP addresses and hostnames

What is the Thor's initial address? Access the serial console to configure the ILOM IP.

Log in with the usual username and password, then:

cd /SP/network
ls
set pendingipaddress=192.168.2.161
set pendingipgateway=192.168.2.150
set commitpending=true

Physically the Thors are racked from bottom to top, so the bottom-most Thor is No. 1.

THOR  hostname  Net0 IP        ILOM IP        Net0 IP (hex)  External IP
1     spstrg01  192.168.1.161  192.168.2.161  C0.A8.01.A1    200.136.80.11
2     spstrg02  192.168.1.162  192.168.2.162  C0.A8.01.A2    200.136.80.12
3     spstrg03  192.168.1.163  192.168.2.163  C0.A8.01.A3    200.136.80.13

Kickstart

/tftpboot/pxelinux.cfg

thor-install

[allan@spserv01 pxelinux.cfg]$ cat thor-install
default CentOS-5.4-install/vmlinuz
append initrd=CentOS-5.4-install/initrd.img method=http://200.145.46.3/CentOS ks=http://200.145.46.3/thor_sprace.ks ksdevice=eth1

Decimal IP ---> hexadecimal IP
192.168.1.161 = C0A801A1
192.168.1.162 = C0A801A2
192.168.1.163 = C0A801A3
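PXELINUX looks up its per-host config file by the client's IP address written as eight uppercase hex digits, which is why the symlinks below carry these names. The mapping above can be reproduced with a small shell function (a sketch; the gethostip utility from the syslinux package does the same job, if installed):

```shell
# Print the uppercase hexadecimal form of a dotted-quad IP address,
# i.e. the filename PXELINUX searches for under pxelinux.cfg/.
ip_to_pxe_hex() {
    local IFS=.              # split the argument on dots
    set -- $1                # the four octets become $1..$4
    printf '%02X%02X%02X%02X\n' "$1" "$2" "$3" "$4"
}

ip_to_pxe_hex 192.168.1.161   # prints C0A801A1
```

If no file with the full hex name exists, PXELINUX keeps retrying with one trailing digit stripped at a time before falling back to the file named default.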

CentOS-5.4-install

The kickstart file used for the Thors is at http://200.145.46.3/thor_sprace.ks

ls -l
lrwxrwxrwx 1 root root 12 Mar 10 16:53 C0A801A1 -> thor-install
lrwxrwxrwx 1 root root 12 Mar 10 16:54 C0A801A2 -> thor-install
lrwxrwxrwx 1 root root 12 Mar 10 16:54 C0A801A3 -> thor-install
-rw-r--r-- 1 root root 525 Jan 7 09:36 default
-rw-r--r-- 1 root root 166 Mar 11 14:46 thor-install

Creating the data RAIDs

  • Create the basic disk layout (the answers piped in below create primary partition 2 with the default start and end, then set its type to fd, Linux RAID autodetect):

echo "
n
p
2


t
2
fd
w
" > fdisk.expect

  • Run fdisk on every data disk (the second loop covers sdaa–sdav; the extra letters name nonexistent devices and those runs simply fail):

for i in a b c d e f g h i j k l m n o p q r s t u v w x y z;do fdisk \
/dev/sd$i < fdisk.expect; done

for i in a b c d e f g h i j k l m n o p q r s t u v w x y z;do fdisk \
/dev/sda$i < fdisk.expect; done

  • After creating the partitions, reboot the machine so the kernel re-reads the partition tables

reboot

  • Create the RAID 5 arrays (eight 6-disk sets, each taking one disk from each controller):

mdadm --create /dev/md10 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sda2 /dev/sdi2 /dev/sdq2 /dev/sdy2 /dev/sdag2 /dev/sdao2

mdadm --create /dev/md11 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdb2 /dev/sdj2 /dev/sdr2 /dev/sdz2 /dev/sdah2 /dev/sdap2

mdadm --create /dev/md12 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdc2 /dev/sdk2 /dev/sds2 /dev/sdaa2 /dev/sdai2 /dev/sdaq2

mdadm --create /dev/md13 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdd2 /dev/sdl2 /dev/sdt2 /dev/sdab2 /dev/sdaj2 /dev/sdar2

mdadm --create /dev/md14 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sde2 /dev/sdm2 /dev/sdu2 /dev/sdac2 /dev/sdak2 /dev/sdas2

mdadm --create /dev/md15 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdf2 /dev/sdn2 /dev/sdv2 /dev/sdad2 /dev/sdal2 /dev/sdat2

mdadm --create /dev/md16 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdg2 /dev/sdo2 /dev/sdw2 /dev/sdae2 /dev/sdam2 /dev/sdau2

mdadm --create /dev/md17 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdh2 /dev/sdp2 /dev/sdx2 /dev/sdaf2 /dev/sdan2 /dev/sdav2
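Each command above builds a 6-disk RAID 5 set, so the usable size is five disks' worth of space (one disk's worth goes to parity). As a quick sanity check, and assuming the 1 TB (~931 GiB) disks of a fully loaded X4540 (an assumption, not stated in this page):

```shell
# Usable capacity of an n-disk RAID 5 array: (n - 1) * size of one disk.
raid5_capacity() {
    echo $(( ($1 - 1) * $2 ))    # $1 = number of disks, $2 = per-disk size
}

raid5_capacity 6 931    # prints 4655 (GiB)
```

This is consistent with the 4654G pool size used when the dCache pools are created later on this page.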

Benchmarking

rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm

yum install  xfsprogs bonnie++ 

mkfs.xfs /dev/md10 

mount /dev/md10 /mnt 
cd /mnt
mkdir teste
chown allan.allan teste
cd teste/
bonnie++ -f -n 0 -u 7833 -s 131072:131072

The 7833 is allan's numeric id; bonnie++'s -u flag makes it run as that user instead of root.

Some useful mdadm commands:

mdadm --stop /dev/md10   # stops the array (unmount its mount point first)

  • Test A: Single disk
    • no RAID
mount /dev/sdaa2 /mnt

  • Test B: RAID5 (6 disks)
    • One disk from each controller
mdadm --create /dev/md10 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdc2 /dev/sdk2 /dev/sds2 /dev/sdaa2 /dev/sdai2 /dev/sdaq2
  • Test C: RAID50 (48 disks)
    • A single RAID 0 over the eight RAID 5 arrays of 6 disks each

mdadm --create /dev/md20 --force --chunk=512 --level=0 --raid-devices=8 --spare-devices=0 /dev/md10 /dev/md11 /dev/md12 /dev/md13 /dev/md14 /dev/md15 /dev/md16 /dev/md17

  • Test D: RAID0 (6 disks)
    • One disk from each controller

  • Test E: RAID0 (24 disks)
    • 4 disks from each controller

  • Test F: RAID0 (48 disks)
    • all disks
mdadm --create /dev/md10 --force --chunk=128 -e 1 --level=0 --raid-devices=48 /dev/sda2 /dev/sdaa2 /dev/sdab2 /dev/sdac2 /dev/sdad2 /dev/sdae2 /dev/sdaf2 /dev/sdag2 /dev/sdah2 /dev/sdai2 /dev/sdaj2 /dev/sdak2 /dev/sdal2 /dev/sdam2 /dev/sdan2 /dev/sdao2 /dev/sdap2 /dev/sdaq2 /dev/sdar2 /dev/sdas2 /dev/sdat2 /dev/sdau2 /dev/sdav2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2 /dev/sdj2 /dev/sdk2 /dev/sdl2 /dev/sdm2 /dev/sdn2 /dev/sdo2 /dev/sdp2 /dev/sdq2 /dev/sdr2 /dev/sds2 /dev/sdt2 /dev/sdu2 /dev/sdv2 /dev/sdw2 /dev/sdx2 /dev/sdy2 /dev/sdz2

Resultados

test  sequential write (CPU)  sequential read (CPU)  random seeks (CPU)
A     85 MB/s  (18%)          87 MB/s   (11%)        195.3/s (3%)
B     82 MB/s  (25%)          260 MB/s  (49%)        318.6/s (9%)
C     247 MB/s (48%)          130 MB/s  (45%)        354.6/s (11%)
D     292 MB/s (29%)          325 MB/s  (40%)        323.7/s (6%)
E     963 MB/s (85%)          889 MB/s  (42%)        377.7/s (7%)
F     733 MB/s (65%)          1153 MB/s (88%)        391.5/s (7%)

Installing the software for the pools:

The first step is to install the Java JDK, from Sun's site. Then create the directories:

mkdir /etc/grid-security
mkdir /etc/grid-security/certificates
Add the following line to /etc/fstab:
osgce:/opt/osg-1.2.4/globus/TRUSTED_CA  /etc/grid-security/certificates           nfs     rw,auto,hard,bg,rsize=32768,wsize=32768,udp,nfsvers=3

The certificate for this machine was requested from the osg-ce:

. /OSG/setup.sh
mkdir spstrg02
cd  spstrg02
cert-gridadmin -host spstrg02.sprace.org.br -prefix spstrg02 -ca doegrids -affiliation osg -vo dosar -show -email mdias@ift.unesp.br
scp spstrg02* spstrg02.sprace.org.br:/tmp/.
Back on spstrg02, install the host certificates:
mv /tmp/spstrg02cert.pem /etc/grid-security/hostcert.pem
mv /tmp/spstrg02key.pem /etc/grid-security/hostkey.pem
chown root: /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem
chmod 400 /etc/grid-security/hostkey.pem
chmod 444 /etc/grid-security/hostcert.pem
openssl x509 -text -noout -in /etc/grid-security/hostcert.pem

Download the dCache-server package and install it:

cd /tmp
wget http://www.dcache.org/downloads/1.9/dcache-server-1.9.5-9.noarch.rpm
rpm -ivh dcache-server-1.9.5-9.noarch.rpm
Configuration:
cp /opt/d-cache/etc/dCacheSetup.template /opt/d-cache/config/dCacheSetup
The modifications made to the file above were:
serviceLocatorHost=osg-se.sprace.org.br
java="/usr/bin/java"
useGPlazmaAuthorizationModule=true
useGPlazmaAuthorizationCell=false
performanceMarkerPeriod=10
Another configuration file:
cp /opt/d-cache/etc/node_config.template /opt/d-cache/etc/node_config
changing:
vim /opt/d-cache/etc/node_config
SERVER_ID=sprace.org.br
NAMESPACE_NODE=osg-se.sprace.org.br
NODE_TYPE=pool
SERVICES=gridftp dcap gsidcap
Also edit:
vim /opt/d-cache/etc/dcachesrm-gplazma.policy
saml-vo-mapping="ON"
kpwd="ON"
saml-vo-mapping-priority="1"
kpwd-priority="2"
mappingServiceUrl="https://spserv01.sprace.org.br:8443/gums/services/GUMSAuthorizationServicePort"
Comment out the line starting with XACMLmappingServiceUrl. From a pool already in operation, copy the following files:
scp /etc/grid-security/storage-authzdb spstrg02:/tmp
scp /opt/d-cache/etc/dcache.kpwd spstrg02:/tmp
and, on spstrg02, move them to the proper places:
mv /tmp/storage-authzdb /etc/grid-security/storage-authzdb
mv /tmp/dcache.kpwd /opt/d-cache/etc/dcache.kpwd
Add the following line (if it is not already there) to the top of the last file:
version 2.1
Run the installation script:
/opt/d-cache/install/install.sh

The pools are prepared as follows, formatting the RAID devices and creating the filesystems (check that the xfsprogs package is installed):

/sbin/mkfs.xfs /dev/md1X
mkdir /raid{0,1,2,3,4,5,6,7}
for i in `seq 0 7`;  do mount /dev/md1$i /raid$i;done
/opt/d-cache/bin/dcache pool create 4654G /raid1/pool1
and the remaining pools /raidX/pool1 are created the same way. Add the pools to dCache:
 /opt/d-cache/bin/dcache pool add spstrg02_1 /raid0/pool1/
/opt/d-cache/bin/dcache pool ls
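Instead of typing the eight create/add pairs by hand, the commands can be generated in a loop. The sketch below only echoes them for review (the pool names spstrg02_1 through spstrg02_8 are an assumption extrapolated from the single example above); remove the echo to actually run them:

```shell
# Print, without executing, the dcache pool commands for all eight RAID sets.
# Pool names spstrg02_1..spstrg02_8 are assumed, not taken from this page.
for i in $(seq 0 7); do
    echo "/opt/d-cache/bin/dcache pool create 4654G /raid$i/pool1"
    echo "/opt/d-cache/bin/dcache pool add spstrg02_$((i + 1)) /raid$i/pool1/"
done
```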
and start dCache:
/opt/d-cache/bin/dcache start
-- AllanSzu - 12 Mar 2010
Topic revision: r11 - 2010-03-19 - MarcoAndreFerreiraDias
 
