---+ Available Resources

%TOC%

---++ SPRACE Cluster

The SPRACE cluster is a T2 site of the CMS collaboration in the [[http://lcg.web.cern.ch/lcg/][Worldwide LHC Computing Grid (WLCG)]] infrastructure.

---+++ General Summary

| *Date* | *# Nodes* | *# Cores* | *HEP-SPEC06* | *TFlops (theoretical)* | *Storage (TB Raw)* | *Storage (!TiB)* |
| *March/2012* | 80 | 320 | 3,310 | 3.02 | 504 | 372 |
| *June/2012* | 144 | 1088 | 13,685 | 10.085 | 1044 | 780 |

---+++ Worker Nodes Summary

SPRACE has 80 worker nodes, corresponding to 320 computing cores. These servers were bought at different times (phases), following the evolution of the project, which started in 2004. By the end of May the equipment acquired in phase V will be installed, adding another 64 worker nodes (768 cores). At that point the phase II equipment will be decommissioned, since it does not meet the minimum amount of RAM per core recommended by the WLCG, which is 1.5 GB/core.

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *# of nodes* | *Total HS06* | *TFlops (theoretical)* |
| II | Itautec | Infoserver LX210 | 2 x Intel Xeon !EM64T @ 3.0 GHz | 2 | 2GB | 32 | 372 | 0.768 |
| III | Itautec | Infoserver LX211 | 2 x Intel Xeon Dual-Core 5130 @ 2.0 GHz | 4 | 4GB | 32 | 990 (*) | 1.024 |
| IV | SGI | Altix XE 340 | 2 x Intel Xeon Quad-Core E5620 @ 2.40 GHz | 8 | 24GB | 16 | 1,948 (*) | 1.228 |
| V | SGI | still ahead | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.30 GHz | 12 | 48GB | 64 | 10,747 (**) | 7.065 |

(*) HEP-SPEC06 benchmarks from https://www.gridpp.ac.uk/wiki/HEPSPEC06, for CentOS 5.3 64-bit, gcc 4.1.2, and the same amount of RAM as in our servers: 30.95 per server with 2 x Intel Xeon Dual-Core 5130 @ 2.0 GHz, and 121.75 per server with 2 x Intel Xeon Quad-Core E5620 @ 2.40 GHz.
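The theoretical TFlops figures quoted above are peak estimates of the form nodes x cores per node x clock frequency x floating-point operations per cycle. A minimal sketch of that arithmetic, assuming 4 flops per cycle (the factor that reproduces the tabulated values):

```python
# Theoretical peak throughput in TFlops:
#   nodes x cores/node x clock (GHz) x flops/cycle / 1000
# The 4 flops/cycle factor is an assumption chosen to match the table.
def peak_tflops(nodes, cores_per_node, clock_ghz, flops_per_cycle=4):
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# Phase V: 64 nodes, 2 x hexa-core E5-2630 @ 2.30 GHz
print(round(peak_tflops(64, 12, 2.30), 3))  # 7.066 (table quotes 7.065)
# Phase IV: 16 nodes, 2 x quad-core E5620 @ 2.40 GHz
print(round(peak_tflops(16, 8, 2.40), 3))   # 1.229 (table quotes 1.228)
```

The same formula reproduces phases II and III (0.768 and 1.024 TFlops), so the table appears to use a uniform 4 flops/cycle throughout.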
(**) Estimate of the HS06 value for the E5-2630 processor, based on the SPEC CINT2006 values for the E5620 (~29) and the E5-2630 (~40), taken from http://www.spec.org/cpu2006/results/cint2006.html

---+++ Storage Summary

SPRACE has a *dCache* based storage system with *372 TiB of effective disk space*, distributed over three !SunFire servers (48 TB raw disk space each) and five Supermicro servers (72 TB raw disk space each). By the end of May the equipment acquired in phase V will be installed, adding four SGI servers (135 TB raw disk space each).

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *Disk Space* | *# of nodes* |
| IV | Sun | !SunFire X4540 | 2 x AMD Opteron Quad-Core 2384 @ 2.7 GHz | 8 | 64GB | 34TiB | 3 |
| IV | Supermicro | MBD-X8DTI-F | 2 x Intel Xeon Quad-Core E5620 @ 2.4GHz | 8 | 24GB | 54TiB | 5 |
| V | SGI | Summit + Infinite Storage 2245 | 2 x Intel Xeon Quad-Core E5620 @ 2.4GHz | 12 | 64GB | 102TiB | 4 |

---+++ Head Nodes Summary

SPRACE has five head nodes: one for local user access (access), one for the Open Science Grid compute element middleware (osg-ce), one for the Open Science Grid storage element middleware (osg-se), and two for general tasks (spserv01 and spserv02).

   * *access.sprace.org.br*
      * Silicon Graphics Inc. model Altix XE 270
      * 2 processors Intel(R) Xeon(R) E5620 @ 2.40GHz, cache 12288 KB (total: 8 cores)
      * RAM 24GB
      * /dev/sda - 280 GB
      * /dev/sdb - 2.5 TB (4x1TB !ST31000340NS)
   * *osg-ce.sprace.org.br*
      * Silicon Graphics Inc. model Altix XE 270
      * 2 processors Intel(R) Xeon(R) E5620 @ 2.40GHz, cache 12288 KB (total: 8 cores)
      * RAM 24GB
      * /dev/sda - 1.0 TB (RAID-5 3x500GB !ST3500320NS)
   * *osg-se.sprace.org.br*
      * Silicon Graphics Inc. model Altix XE 270
      * 2 processors Intel(R) Xeon(R) E5620 @ 2.40GHz, cache 12288 KB (total: 8 cores)
      * RAM 24GB
      * /dev/sda - 1.0 TB (RAID-5 3x500GB !ST3500320NS)
   * *spserv01.sprace.org.br*
      * Silicon Graphics Inc. model Altix XE 270
      * 2 processors Intel(R) Xeon(R) E5620 @ 2.40GHz, cache 12288 KB (total: 8 cores)
      * RAM 24GB
      * /dev/sda - 1.0 TB (RAID-5 3x500GB !ST3500320NS)
   * *spserv02.sprace.org.br*
      * Silicon Graphics Inc. model Altix XE 270
      * 2 processors Intel(R) Xeon(R) E5620 @ 2.40GHz, cache 12288 KB (total: 8 cores)
      * RAM 24GB
      * /dev/sda - 280 GB
      * /dev/sdb - 2.5 TB (4x1TB !ST31000340NS)

---+++ Decommissioned Hardware

The machines bought in phase I of the project were decommissioned because they were based on a 32-bit architecture. Other machines, bought in phase II, were also decommissioned because their hardware warranty had expired.

*Processing hardware*

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *# of nodes* |
| I | Itautec | Infoserver 1252 | 2 x Intel Xeon DP 2.4 GHz | 2 | 1 GB | 24 |

*Storage hardware*

| *Phase* | *Vendor* | *Model* | *Processor* | *Cores* | *RAM* | *# of nodes* |
| I | Dell | !PowerEdge 2650 | 2 x Intel Xeon 2.4 GHz | 2 | 2 GB | 1 |
| II | Dell | !PowerEdge 1850 | 2 x Intel Xeon 3.0 GHz | 2 | 2 GB | 1 |

| *Phase* | *Vendor* | *Model* | *Raw Disk Space* | *# of units* |
| I | Dell | !PowerVault 220S | 2TB | 2 |
| II | Dell | !PowerVault 220S | 4TB | 2 |

---++ WLCG Pledges

---+++ WLCG pledges for 2012

[[https://cms-docdb.cern.ch/cgi-bin/DocDB/ShowDocument?docid=5935][According to the WLCG pledges for 2012]], a nominal T2 site provides
   * 10.9 kHS06 of Processing Resources
   * 810TB of Disk Space
   * 30TB of Stage-Out Space
   * 250TB of Group Space (125TB per group)
   * 200TB of Central Space
   * 170TB of Local Space
   * 160TB of User Space (~40 Users of 4TB each).

---+++ SPRACE status on March 2012

The current SPRACE status is
   * 3.31 kHS06 of Processing Resources
   * 372TB of Disk Space
   * 20TB of Stage-Out Space
   * 100TB of Group Space
   * 100TB of Central Space
   * 80TB of Local Space
   * 72TB of User Space.
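The phase V total HS06 quoted in the worker-nodes table (footnote **) scales the measured per-server value of the E5620 by the ratio of the approximate SPEC CINT2006 results of the two processors. A sketch of that estimate, using the values quoted in the footnotes:

```python
# Phase V HS06 estimate: scale the measured E5620 per-server HS06
# by the ratio of approximate SPEC CINT2006 results (see footnote **).
hs06_e5620_server = 121.75   # measured per-server value (gridpp.ac.uk table)
cint2006_e5620 = 29.0        # approximate SPEC CINT2006, spec.org
cint2006_e5_2630 = 40.0      # approximate SPEC CINT2006, spec.org

hs06_e5_2630_server = hs06_e5620_server * cint2006_e5_2630 / cint2006_e5620
total_phase_v = 64 * hs06_e5_2630_server  # 64 phase V worker nodes
print(int(total_phase_v))                 # 10747, the value in the table
```

The rounded SPEC ratios make this a rough estimate; the table flags it accordingly with (**).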
---+++ SPRACE status after phase V

After the next upgrade, corresponding to phase V, which will be installed by the end of May, the SPRACE status will be
   * 13.685 kHS06 of Processing Resources
   * 780TB of Disk Space
   * 30TB of Stage-Out Space
   * 250TB of Group Space (125 TB per group)
   * 200TB of Central Space
   * 170TB of Local Space
   * 120TB of User Space.

<!--
| *Phase* | *Processor* | *Number of* | *Number of* | *SI2K* | *SI2K* | *SI2006* | *SI2006* |
| *#* | *Specification* | *Nodes (WN)* | *Cores (WN)* | *per core* | *Total* | *per core* | *Total* |
| I | Intel Xeon DP 2.4 GHz | 24 (22) | 50 (44) | 900 | 45,000 | 5.3 | 265 |
| II | Intel Xeon !EM64T 3.0 GHz | 33 (32) | 66 (64) | 1,350 | 89,100 | 7.9 | 521 |
| III | Intel Xeon Dual-Core 2.0 GHz | 32 (29) | 128 (116) | 2,100 | 268,800 | 12.3 | 1,574 |
| *Total of WN* | ** | *83* | *224* | ** | *369,600* | ** | *2,165* |
| *Total* | ** | *89* | *244* | ** | *402,900* | ** | *2,360* |

   * [[http://www.spec.org/cpu/results/cint2000.html][SPECInt 2000]]
      * [[http://www.spec.org/cpu/results/res2003q2/cpu2000-20030407-02040.html][Phase I]]
      * [[http://www.spec.org/cpu/results/res2005q2/cpu2000-20050610-04199.html][Phase II]]
      * [[http://www.spec.org/cpu/results/res2006q3/cpu2000-20060626-06253.html][Phase III]]
   * [[http://www.spec.org/cpu2006/results/cpu2006.html][SPECInt 2006]]
      * Phase I and II: conversion factor [[http://www.spec.org/cpu/results/res2007q1/cpu2000-20070119-08332.html][2000]]/[[http://www.spec.org/cpu2006/results/res2007q1/cpu2006-20070119-00221.html][2006]] = 170
      * [[http://www.spec.org/cpu2006/results/res2007q1/cpu2006-20070119-00221.html][Phase III]]
   * The unit of computing power kSI2K corresponds to one Intel Xeon 2.8 GHz processor: http://www1.jinr.ru/Pepan/2005-v36/v-36-1/pdf/v-36-1_02.pdf
   * GigaFlop: Xeon processors execute 2 floating point operations per clock cycle. A Xeon at 2.4 GHz can therefore execute up to 4.8 billion floating point operations per second: 2 operations/clock cycle x 2.4 x 10^9 clock cycles/sec = 4.8 GFlops
   * Compare with the US CMS Tier-2 site capacity: http://t2.unl.edu/uscms/current-us-cms-tier-2-site-capacity-1-24-07/
   * See also the LHC Computing Grid Tier 2 Centres: http://lcg.web.cern.ch/lcg/C-RRB/Tier-2/

CMS Tier-1
   * CMS Technical Design Report CERN-LHCC-2005-023 (CMS TDR) 20/June/2005

| *Tier-1* | *2007* | *2008* | *2009* | *2010* |
| CPU (MSi2k) | 1.3 | 2.5 | 3.5 | 6.8 |
| Disk (PB) | 0.3 | 1.2 | 1.7 | 2.6 |
| Tape (PB) | 0.6 | 2.8 | 4.9 | 7.0 |
| WAN (Gbps) | 3.6 | 7.2 | 10.7 | 16.1 |

   * *Computing*
      * WLCG: http://lcg.web.cern.ch/LCG/
         * Management: http://lcg.web.cern.ch/LCG/proj_structure.htm
         * Resources: http://lcg.web.cern.ch/LCG/resources.htm *(see tables)*
      * CMS Computing: http://cms.cern.ch/iCMS/jsp/page.jsp?mode=cms&action=url&urlkey=CMS_COMPUTING
         * Organization: http://lucas-nice.web.cern.ch/lucas-nice/cpt/2008-02-07Offline-Computing-Organigram.pdf
-->
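The gap between the March 2012 status and the nominal 2012 T2 pledge can be summarized as per-category fractions. A small sketch, with shorthand keys for the bullet items above (values copied from the two lists):

```python
# Fraction of the nominal 2012 WLCG T2 pledge delivered by SPRACE
# as of March 2012 (values from the bullet lists above; TB and kHS06).
pledge_2012 = {"cpu_khs06": 10.9, "disk_tb": 810, "stage_out_tb": 30}
march_2012 = {"cpu_khs06": 3.31, "disk_tb": 372, "stage_out_tb": 20}

for key in pledge_2012:
    frac = march_2012[key] / pledge_2012[key]
    print(f"{key}: {frac:.0%} of pledge")
```

This shows roughly 30% of the CPU pledge and 46% of the disk pledge in place before phase V; the phase V figures (13.685 kHS06, 780 TB) bring the site to, or close to, the nominal values.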
Topic revision: r19 - 2012-06-13 - SergioLietti