
Available Resources

SPRACE Cluster

The SPRACE cluster is a T2 site of the CMS collaboration at the Worldwide LHC Computing Grid (WLCG) infrastructure.

General Summary

Production Time History

| Date | # Nodes | # Cores | HEP-SPEC06 | TFlops (theoretical) | Storage (TB raw) | Storage (TiB) |
| Mar/2004 | 22 | 44 | 113 | 0.233 | 4 | 3.4 |
| Jun/2005 | 54 | 108 | 485 | 1.001 | 12.4 | 10.6 |
| Sep/2006 | 86 | 172 | 1,475 | 2.025 | 12.4 | 10.6 |
| Aug/2010 | 80 | 320 | 3,255 | 3.02 | 144 | 102.6 |
| Mar/2012 | 80 | 320 | 3,255 | 3.02 | 504 | 378.6 |
| Jun/2012 | 144 | 1,088 | 13,698 | 10.085 | 1,044 | 787.0 |

Upgrade Time History

| Date | Financial Support | # Nodes | # Cores | HEP-SPEC06 | TFlops (theoretical) | Storage (TB raw) | Storage (TiB) |
| Feb/2004 | FAPESP phase I | 22 | 44 | 113 | 0.233 | 4 | 3.4 |
| Jun/2005 | FAPESP phase II | 32 | 64 | 372 | 0.768 | 8.4 | 7.2 |
| Sep/2006 | FAPESP phase III | 32 | 128 | 990 | 1.024 | 0 | 0 |
| Aug/2010 | FAPESP phase IV | 16 | 128 | 1,893 | 1.228 | 144 | 102.6 |
| Mar/2012 | CNPq Universal | 0 | 0 | 0 | 0 | 360 | 270.0 |
| Jun/2012 | FAPESP phase V | 64 | 768 | 10,443 | 7.065 | 540 | 414.4 |
| Mar/2016 | FAPESP phase VI | 16 | 256 | 10,521 | | 451.49 | 410.63 |
| Oct/2015 | FAPESP (extra fund) | | | | | 180 | 163.7 |
| May/2016 | Huawei Partnership | | | | | 384 | 349.4 |
| May/2017 | FAPESP (extra fund) | | | | | 252 | 229.3 |
| Oct/2017 | FAPESP phase VII | 32 | 1,280 | 12,497 | | 1,008 | 917 |
| Aug/2019 | RNEFAE fund | | | | | 640 | 582.3 |

SPRACE Current status

The SPRACE current status, after the FAPESP thematic phase V upgrade installed at the end of May 2012, is:

  • 13,698 HS06 of Processing Resources
  • 787 TiB of Disk Space
    • 30 TB of Stage-Out Space
    • 250 TB of Group Space (125 TB per group)
    • 200 TB of Central Space
    • 170 TB of Local Space
    • 127 TB of User Space (~42 users of 4 TB each)

Worker Nodes Summary

SPRACE has 144 worker nodes, corresponding to 1,088 computing cores. These servers were bought at different times (phases), following the evolution of the project, which started in 2004. The phase I equipment was decommissioned in August 2010 because its 32-bit architecture was no longer useful for CMS production. The equipment acquired in phase V was installed at the end of May 2012, adding 64 worker nodes (768 cores) and significantly enhancing the computing power and storage capacity of the cluster.

| Phase | Vendor | Model | Processor | Cores | RAM | # of nodes | Total HS06 | TFlops (theoretical) |
| II | Itautec | Infoserver LX210 | 2 x Intel Xeon EM64T @ 3.0 GHz | 2 | 2 GB | 32 | 372 | 0.768 |
| III | Itautec | Infoserver LX211 | 2 x Intel Xeon Dual-Core 5130 @ 2.0 GHz | 4 | 4 GB | 32 | 990 (*) | 1.024 |
| IV | SGI | Altix XE 340 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 16 | 1,893 (*) | 1.228 |
| V | SGI | Steel Head XE C2112 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 48 GB | 64 | 10,443 (**) | 7.065 |

(*) HEPSPEC06 benchmarks from https://www.gridpp.ac.uk/wiki/HEPSPEC06. Values are for CentOS 5.3 (64-bit), gcc 4.1.2, and the same amount of RAM as in our servers. For each server with 2 x Intel Xeon Dual-Core 5130 @ 2.0 GHz the HEPSPEC06 is 30.95, and for each server with 2 x Intel Xeon Quad-Core E5620 @ 2.40 GHz it is 118.30.

(**) Estimate of the HS06 value for the E5-2630 processor, based on the SPEC CINT 2006 values for the E5620 (~29) and E5-2630 (~40), extracted from http://www.spec.org/cpu2006/results/cint2006.html.
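The estimate above can be reproduced in a few lines. The per-server HS06 value of 118.30 for the dual E5620 comes from the GridPP table cited in (*); the SPEC CINT 2006 values are the approximate figures quoted in (**):

```python
# Estimate HS06 for a dual E5-2630 server by scaling the measured
# dual-E5620 value with the ratio of the SPEC CINT 2006 results.
hs06_e5620_server = 118.30   # measured HS06, server with 2 x E5620 (GridPP table)
cint_e5620 = 29              # approximate SPEC CINT 2006, E5620
cint_e5_2630 = 40            # approximate SPEC CINT 2006, E5-2630

hs06_e5_2630_server = hs06_e5620_server * cint_e5_2630 / cint_e5620
total_phase_v = 64 * hs06_e5_2630_server  # 64 phase V worker nodes

print(round(hs06_e5_2630_server, 1))  # ~163.2 HS06 per server
print(round(total_phase_v))           # ~10443 HS06 for phase V
```

Scaling by 64 nodes reproduces the 10,443 HS06 quoted for phase V in the tables above.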

Storage Summary

SPRACE has a dCache-based storage system with 787 TiB of effective disk space, distributed across three SunFire servers (48 TB raw disk space each), five Supermicro servers (72 TB raw each), and four SGI Summit + Infinite 2245 servers (135 TB raw each).

| Phase | Vendor | Model | Processor | Cores | RAM | # of servers | Storage (TB raw) | Storage (TiB) |
| IV | Sun | SunFire X4540 | 2 x AMD Opteron Quad-Core 2384 @ 2.7 GHz | 8 | 64 GB | 3 | 144 | 102.6 |
| Univ | Supermicro | MBD-X8DTI-F | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 5 | 360 | 270.0 |
| V | SGI | Summit + Infinite 2245 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 4 | 540 | 414.4 |

Head Nodes Summary

SPRACE has five head nodes: one for local user access (access), one for the Open Science Grid compute element middleware (osg-ce), one for the Open Science Grid storage element middleware (osg-se), and two for general tasks (spserv01 and spserv02).

| Phase | Service | Vendor | Model | Processor | Cores | RAM | Disk Space |
| V | access | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 6.0 TB (RAID-5, 4x2 TB, 7,200 RPM) |
| V | osg-ce | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 1.5 TB (RAID-5, 4x500 GB, 7,200 RPM) |
| V | osg-se | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 1.5 TB (RAID-5, 4x500 GB, 7,200 RPM) |
| IV | spserv01 | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3x500 GB, 7,200 RPM) |
| IV | spserv02 | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 3.0 TB (RAID-5, 4x1 TB, 7,200 RPM) |

Network Equipment Summary

SPRACE is connected to the Internet by a 10 + 10 Gbps link provided by ANSP. All head nodes and storage servers are connected at 10 Gbps. All worker nodes have 1 Gbps connections to top-of-rack switches with 10 Gbps uplinks to the core switch.

# of Devices | Description → table below; empty Description cell for the Catalyst 6506E is as in the original.

| Phase | Service | Vendor | Model | # of Devices | Description |
| II | top of rack | D-Link | DGS 1224T | 1 | 24x1 Gbps ports |
| II | top of rack | 3Com | 2824 | 1 | 24x1 Gbps ports |
| III | top of rack | 3Com | 3834 | 2 | 24x1 Gbps ports |
| IV | top of rack | SMC | TigerStack II 8848M | 1 | 48x1 Gbps + 2x10 Gbps ports |
| Finep | top of rack | Cisco | Nexus 5010 | 1 | 20x10 Gbps ports |
| IV | core | Cisco | Catalyst 6506E | 1 | |
| V | top of rack | LG-Ericsson | 4550G | 2 | 48x1 Gbps + 1x10 Gbps ports |

Decommissioned Hardware

The machines bought in phase I of the project were decommissioned because they were based on a 32-bit architecture. Machines bought in phase II were also decommissioned after their hardware warranty expired.

Processing hardware

| Phase | Vendor | Model | Processor | Cores | RAM | # of nodes |
| I | Itautec | Infoserver 1252 | 2 x Intel Xeon DP @ 2.4 GHz | 2 | 1 GB | 24 |

Storage hardware

| Phase | Vendor | Model | Processor | Cores | RAM | # of nodes |
| I | Dell | PowerEdge 2650 | 2 x Intel Xeon @ 2.4 GHz | 2 | 2 GB | 1 |
| II | Dell | PowerEdge 1850 | 2 x Intel Xeon @ 3.0 GHz | 2 | 2 GB | 1 |

| Phase | Vendor | Model | Raw Disk Space | # of units |
| I | Dell | PowerVault 220S | 2 TB | 2 |
| II | Dell | PowerVault 220S | 4 TB | 2 |

Head Nodes hardware

| Phase | Service | Vendor | Model | Processor | Cores | RAM | Disk Space |
| I | old admin | Itautec | Infoserver 1251 | 2 x Intel Xeon DP @ 2.4 GHz | 2 | 1 GB | 288 GB (4x72 GB, SCSI, 10K RPM) |
| IV | old access | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 3.0 TB (RAID-5, 4x1 TB, 7,200 RPM) |
| IV | old osg-ce | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3x500 GB, 7,200 RPM) |
| IV | old osg-se | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3x500 GB, 7,200 RPM) |

Network Equipment

| Phase | Service | Vendor | Model | # of Devices | Description |
| I | top of rack | D-Link | DGS 1024T | 2 | 24x1 Gbps ports |
| Donation | core | Cisco | 3750 | 1 | 20x1 Gbps ports |

WLCG pledges

U.S. CMS Facilities

CMS T2 Associations and Allocations

WLCG pledges for 2013

According to the WLCG pledges for 2013, a nominal T2 site is

  • 10.6 kHS06 of Processing Resources
  • 787 TB of Disk Space

WLCG pledges for 2012

According to the WLCG pledges for 2012, a nominal T2 site is

  • 9.5 kHS06 of Processing Resources
  • 787 TB of Disk Space

According to the Service Credit in 2012, a nominal T2 site is

  • 10.9 kHS06 of Processing Resources
  • 810 TB of Disk Space
    • 30 TB of Stage-Out Space
    • 250 TB of Group Space (125 TB per group)
    • 200 TB of Central Space
    • 170 TB of Local Space
    • 160 TB of User Space (~40 users of 4 TB each)

Topic revision: r36 - 2020-01-30 - marcio