
Installing dCache on the SUN X4540 Thors

Description

Adding storage for dCache using Sun's Thor X4540

Thor cabling

                            THOR1    THOR2    THOR3
Thor port Net0              cx7.49   cx7.50   cx7.51
Sprace switch port          25       26       27
                            cx6.41   cx6.42   cx6.43
Thor port NetMng            cx7.52   cx7.53   cx7.54
GridUnesp Mng switch port   15       16       17
                            cx1.73   cx1.74   cx1.75

IP addresses and hostnames

What is the Thor's initial address???

Access the serial console to configure the IP

login and password (duh)

cd /SP/network
ls
set pendingipaddress=192.168.2.161
set pendinggateway=192.168.2.150
set commitpending=true
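
The pending values only take effect once committed. After the commit, the active settings can be re-checked from the same ILOM prompt (sketch of the check; `show` lists the properties of the current target):

```
show /SP/network
```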

Physically, the Thors are stacked bottom to top, so the first Thor from the bottom is No. 1.

THOR  hostname  Net0 IP         ILOM IP         Net0 IP (hex)
1     spstrg01  192.168.1.161   192.168.2.161   C0.A8.01.A1
2     spstrg02  192.168.1.162   192.168.2.162   C0.A8.01.A2
3     spstrg03  192.168.1.163   192.168.2.163   C0.A8.01.A3

Kickstart

/tftpboot/pxelinux.cfg

thor-install

[allan@spserv01 pxelinux.cfg]$ cat thor-install
default CentOS-5.4-install/vmlinuz
append initrd=CentOS-5.4-install/initrd.img method=http://200.145.46.3/CentOS ks=http://200.145.46.3/thor_sprace.ks ksdevice=eth1

Per-host config files are named after each machine's Net0 IP address in hex:

192.168.1.161 = C0A801A1
192.168.1.162 = C0A801A2
192.168.1.163 = C0A801A3
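
pxelinux looks up its per-host config file by the client's IP address written as uppercase hex. A minimal sketch of the conversion (the `ip_to_pxe_hex` helper name is ours, not part of the setup):

```shell
#!/bin/bash
# Convert a dotted-quad IP address to the uppercase hex
# filename that pxelinux searches for (192.168.1.161 -> C0A801A1)
ip_to_pxe_hex() {
    # unquoted substitution intentionally splits the four octets
    printf '%02X%02X%02X%02X\n' $(echo "$1" | tr '.' ' ')
}

ip_to_pxe_hex 192.168.1.161   # prints C0A801A1
```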

CentOS-5.4-install

The kickstart file used for the Thors is at http://200.145.46.3/thor_sprace.ks

ls -l

lrwxrwxrwx 1 root root  12 Mar 10 16:53 C0A801A1 -> thor-install
lrwxrwxrwx 1 root root  12 Mar 10 16:54 C0A801A2 -> thor-install
lrwxrwxrwx 1 root root  12 Mar 10 16:54 C0A801A3 -> thor-install
-rw-r--r-- 1 root root 525 Jan  7 09:36 default
-rw-r--r-- 1 root root 166 Mar 11 14:46 thor-install

Creating the data RAIDs

  • Create the basic disk layout

The lines below answer fdisk's prompts in order: n (new partition), p (primary), 2 (partition number), two empty lines accepting the default first and last cylinders, t (change type), 2 (partition number), fd (Linux raid autodetect), w (write and exit):

echo "
n
p
2


t
2
fd
w
" > fdisk.expect

  • Run fdisk (the first loop covers sda–sdz, the second sdaa–sdav; the iterations for the nonexistent sdaw–sdaz simply fail and can be ignored):

for i in a b c d e f g h i j k l m n o p q r s t u v w x y z;do fdisk \
/dev/sd$i < fdisk.expect; done

for i in a b c d e f g h i j k l m n o p q r s t u v w x y z;do fdisk \
/dev/sda$i < fdisk.expect; done

  • After creating the partitions, reboot the machine

reboot

  • Create the RAIDs

mdadm --create /dev/md10 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sda2 /dev/sdi2 /dev/sdq2 /dev/sdy2 /dev/sdag2 /dev/sdao2

mdadm --create /dev/md11 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdb2 /dev/sdj2 /dev/sdr2 /dev/sdz2 /dev/sdah2 /dev/sdap2

mdadm --create /dev/md12 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdc2 /dev/sdk2 /dev/sds2 /dev/sdaa2 /dev/sdai2 /dev/sdaq2

mdadm --create /dev/md13 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdd2 /dev/sdl2 /dev/sdt2 /dev/sdab2 /dev/sdaj2 /dev/sdar2

mdadm --create /dev/md14 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sde2 /dev/sdm2 /dev/sdu2 /dev/sdac2 /dev/sdak2 /dev/sdas2

mdadm --create /dev/md15 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdf2 /dev/sdn2 /dev/sdv2 /dev/sdad2 /dev/sdal2 /dev/sdat2

mdadm --create /dev/md16 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdg2 /dev/sdo2 /dev/sdw2 /dev/sdae2 /dev/sdam2 /dev/sdau2

mdadm --create /dev/md17 --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0 /dev/sdh2 /dev/sdp2 /dev/sdx2 /dev/sdaf2 /dev/sdan2 /dev/sdav2
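
The eight invocations above follow one pattern: array md1N takes disk N from each of the six controllers (disks lettered sda–sdav, eight per controller). A sketch that generates the same command lines (`gen_raid5_cmds` is a hypothetical helper; it only echoes, so nothing is created):

```shell
#!/bin/bash
# Disks in controller order: 8 per controller, 6 controllers, 48 total
disks=(a b c d e f g h  i j k l m n o p  q r s t u v w x \
       y z aa ab ac ad ae af  ag ah ai aj ak al am an  ao ap aq ar as at au av)

gen_raid5_cmds() {
    for n in 0 1 2 3 4 5 6 7; do
        members=""
        for c in 0 1 2 3 4 5; do
            # disk N on controller C sits at offset C*8+N
            members="$members /dev/sd${disks[$((c * 8 + n))]}2"
        done
        echo "mdadm --create /dev/md1$n --force --chunk=128 --level=5 --raid-devices=6 --spare-devices=0$members"
    done
}

gen_raid5_cmds
```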

Benchmarking

rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm

yum install  xfsprogs bonnie++ 

mkfs.xfs /dev/md10 

mount /dev/md10 /mnt
cd /mnt
mkdir teste
chown allan.allan teste
cd teste/
bonnie++ -f -n 0 -u 7833 -s 131072:131072

7833 is allan's numeric user id (bonnie++ refuses to run as root). The -s 131072:131072 argument requests a 131072 MB (128 GB) test file with 131072-byte (128 KiB) chunks, matching the RAID chunk size.
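
The numeric id can be looked up rather than hard-coded (sketch; `allan` is the local account on the spstrg0x hosts, 7833 is kept as the fallback):

```shell
#!/bin/bash
# Look up a user's numeric uid instead of hard-coding it
user=allan
uid=$(id -u "$user" 2>/dev/null)   # empty if the account does not exist
echo "bonnie++ -f -n 0 -u ${uid:-7833} -s 131072:131072"
```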

Some mdadm commands:

mdadm --stop /dev/md10   stops the RAID (unmount the mount point first)

  • Test A: Single disk
    • no RAID

  • Test B: RAID5 (6 disks)
    • One disk from each controller

  • Test C: RAID50 (48 disks)
    • A single RAID 0 with 8 RAID5, each one with 6 disks

mdadm --create /dev/md20 --force --chunk=512 --level=0 --raid-devices=8 --spare-devices=0 /dev/md10 /dev/md11 /dev/md12 /dev/md13 /dev/md14 /dev/md15 /dev/md16 /dev/md17

  • Test D: RAID0 (6 disks)
    • One disk from each controller

  • Test E: RAID0 (24 disks)
    • 4 disks from each controller

  • Test F: RAID0 (48 disks)
    • all disks

mdadm --create /dev/md10 --force --chunk=128 -e 1 --level=0 --raid-devices=48 \
  /dev/sda2 /dev/sdaa2 /dev/sdab2 /dev/sdac2 /dev/sdad2 /dev/sdae2 /dev/sdaf2 /dev/sdag2 \
  /dev/sdah2 /dev/sdai2 /dev/sdaj2 /dev/sdak2 /dev/sdal2 /dev/sdam2 /dev/sdan2 /dev/sdao2 \
  /dev/sdap2 /dev/sdaq2 /dev/sdar2 /dev/sdas2 /dev/sdat2 /dev/sdau2 /dev/sdav2 /dev/sdb2 \
  /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2 /dev/sdj2 \
  /dev/sdk2 /dev/sdl2 /dev/sdm2 /dev/sdn2 /dev/sdo2 /dev/sdp2 /dev/sdq2 /dev/sdr2 \
  /dev/sds2 /dev/sdt2 /dev/sdu2 /dev/sdv2 /dev/sdw2 /dev/sdx2 /dev/sdy2 /dev/sdz2

Results

test  sequential write (CPU)  sequential read (CPU)  random seek (CPU)
A     MB/s (%)                MB/s (%)               /s (%)
B     82 MB/s (25%)           260 MB/s (49%)         318.6/s (9%)
C     247 MB/s (48%)          130 MB/s (45%)         354.6/s (11%)
D     MB/s (%)                MB/s (%)               /s (%)
E     963 MB/s (85%)          889 MB/s (42%)         377.7/s (7%)
F     733 MB/s (65%)          1153 MB/s (88%)        391.5/s (7%)

-- AllanSzu - 12 Mar 2010

Topic revision: r5 - 2010-03-16 - AllanSzu
