Available Resources

SPRACE Cluster

The SPRACE cluster is a T2 site of the CMS collaboration at the Worldwide LHC Computing Grid (WLCG) infrastructure.

General Summary

Production Time History

| Date | # Nodes | # Cores | HEP-SPEC06 | TFlops (theoretical) | Storage (TB raw) | Storage (TiB) |
| Mar/2004 | 22 | 44 | 113 | 0.233 | 4 | 3.4 |
| Jun/2005 | 54 | 108 | 485 | 1.001 | 12.4 | 10.6 |
| Sep/2006 | 86 | 172 | 1,475 | 2.025 | 12.4 | 10.6 |
| Aug/2010 | 80 | 320 | 3,255 | 3.02 | 144 | 102.6 |
| Mar/2012 | 80 | 320 | 3,255 | 3.02 | 504 | 378.6 |
| Jun/2012 | 144 | 1,088 | 13,698 | 10.085 | 1,044 | 787.0 |
| Aug/2019 | 128 | 2,688 | 29,700 | 25.353 | 3,364 | 3,060.9 |

Upgrade Time History

| Date | Financial Support | # Nodes | # Cores | HEP-SPEC06 | TFlops (theoretical) | Storage (TB raw) | Storage (TiB) |
| Feb/2004 | FAPESP phase I | 22 | 44 | 113 | 0.233 | 4 | 3.4 |
| Jun/2005 | FAPESP phase II | 32 | 64 | 372 | 0.768 | 8.4 | 7.2 |
| Sep/2006 | FAPESP phase III | 32 | 128 | 990 | 1.024 | 0 | 0 |
| Aug/2010 | FAPESP phase IV | 16 | 128 | 1,893 | 1.228 | 144 | 102.6 |
| Mar/2012 | CNPq Universal | 0 | 0 | 0 | 0 | 360 | 327.0 |
| Jun/2012 | FAPESP phase V | 64 | 768 | 10,443 | 7.065 | 540 | 491 |
| Oct/2015 | FAPESP (extra fund) | 0 | 0 | 0 | 0 | 180 | 163.7 |
| Mar/2016 | FAPESP phase VI | 16 | 256 | 5,060 | 4.9 | 0 | 0 |
| May/2016 | Huawei Partnership | 0 | 0 | 0 | 0 | 384 | 349.4 |
| May/2017 | FAPESP (extra fund) | 0 | 0 | 0 | 0 | 252 | 229.3 |
| Oct/2017 | FAPESP phase VII | 32 | 1,280 | 12,497 | 12.16 | 1,008 | 917 |
| Aug/2019 | RENAFAE fund | 0 | 0 | 0 | 0 | 640 | 582.3 |

SPRACE Current status

The current status of SPRACE, after the FAPESP thematic project phase VII upgrade and the storage upgrade funded by RENAFAE, both installed by the end of August 2019, is

  • 29.7 KHS06 of Processing Resources
  • 2.3 PB of Disk Space
    • 1.9 PB Dedicated to CMS Production
    • 0.4 PB Group Space

Worker Nodes Summary

SPRACE has 128 worker nodes, corresponding to 2,688 computing cores. These servers were bought at different times (phases), following the evolution of the project, which started in 2004. Phase I equipment was decommissioned in August 2010 because its 32-bit architecture was no longer useful for CMS production. The equipment acquired in phase V was installed at the end of May 2012, adding another 64 worker nodes (768 cores) and significantly enhancing the computing power and storage capacity of the cluster. Phase II and phase III equipment was decommissioned by the end of 2015.

| Phase | Vendor | Model | Processor | Cores | RAM | # of nodes | Total HS06 | TFlops (theoretical) |
| IV | SGI | Altix XE 340 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 16 | 1,893 | 1.228 |
| V | SGI | Steel Head XE C2112 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 48 GB | 64 | 10,443 | 7.065 |
| VI | SGI | Rackable C2112-4GP3-R-G | 2 x Intel Xeon E5-2630 v3 @ 2.3 GHz | 16 | 64 GB | 16 | 5,060 | 4.9 |
| VII | SGI | C2112-4GP2 | 2 x Intel Xeon E5-2630 v4 @ 2.2 GHz | 20 | 128 GB | 32 | 12,497 | 12.16 |
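The "TFlops (theoretical)" figures for phases IV and V are consistent with the usual peak estimate of cores x clock x double-precision FLOPs per cycle, using 4 FLOPs per cycle for SSE-era Xeons; the AVX-capable CPUs of phases VI and VII were evidently rated with different per-cycle factors. A minimal sketch, with the function name and the 4 FLOPs/cycle default as our assumptions:

```python
def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 4) -> float:
    """Theoretical peak = cores x clock (GHz) x double-precision FLOPs per cycle.
    4 FLOPs/cycle matches the figures quoted for phases IV and V."""
    return cores * clock_ghz * flops_per_cycle / 1000.0  # GFlops -> TFlops

# The table truncates these to 1.228 and 7.065.
print(f"{peak_tflops(128, 2.4):.4f}")  # phase IV -> 1.2288
print(f"{peak_tflops(768, 2.3):.4f}")  # phase V  -> 7.0656
```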

Storage Summary

SPRACE has a dCache-based storage system with 2.3 PiB of effective disk space, distributed over five Supermicro servers (360 TB raw), four SGI Summit + InfiniteStorage 2245 servers (540 TB raw), one SGI Modular InfiniteStorage server (432 TB raw), one Huawei OceanStor (1,024 TB raw), and two Dell PowerEdge R730 + MD1280 servers (1,008 TB raw). The phase IV storage servers were decommissioned in January 2019.

| Phase | Vendor | Model | Processor | Cores | RAM | # of servers | Storage (TB raw) | Storage (TiB) |
| Univ | Supermicro | MBD-X8DTI-F | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 5 | 360 | 327 |
| V | SGI | Summit + Infinite 2245 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 4 | 540 | 491 |
| | SGI | Modular InfiniteStorage | 2 x Intel Xeon E5-2630 v2 @ 2.60 GHz | 12 | 64 GB | 1 | 432 | 393 |
| | Huawei | OceanStor | 2 x Intel Xeon E5-2695 v3 @ 2.30 GHz | 28 | 64 GB | 1 | 1,024 | 931 |
| VII | Dell | PowerEdge R730 + MD1280 | 2 x Intel Xeon E5-2620 v4 @ 2.10 GHz | 16 | 128 GB | 2 | 1,008 | 917 |

Head Nodes Summary

SPRACE has five head nodes: one for local user access (access), one for the Open Science Grid compute element middleware (osg-ce), one for the Open Science Grid storage element middleware (osg-se), and two for general tasks (spserv01 and spserv02).

| Phase | Service | Vendor | Model | Processor | Cores | RAM | Disk Space |
| V | access | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 6.0 TB (RAID-5, 4 x 2 TB, 7200 RPM) |
| V | osg-ce | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 1.5 TB (RAID-5, 4 x 500 GB, 7200 RPM) |
| V | osg-se | SGI | Summit C2108 | 2 x Intel Xeon Hexa-Core E5-2630 @ 2.3 GHz | 12 | 64 GB | 1.5 TB (RAID-5, 4 x 500 GB, 7200 RPM) |
| IV | spserv01 | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3 x 500 GB, 7200 RPM) |
| IV | spserv02 | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 3.0 TB (RAID-5, 4 x 1 TB, 7200 RPM) |
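The disk-space figures in the table are consistent with RAID-5 usable capacity, which sacrifices one disk's worth of space for parity. A minimal sketch (the function name is ours):

```python
def raid5_usable(n_disks: int, disk_tb: float) -> float:
    """RAID-5 keeps one disk's worth of parity: usable = (n - 1) x disk size."""
    return (n_disks - 1) * disk_tb

print(raid5_usable(4, 2.0))  # access:   4 x 2 TB   -> 6.0 TB
print(raid5_usable(4, 0.5))  # osg-ce:   4 x 500 GB -> 1.5 TB
print(raid5_usable(3, 0.5))  # spserv01: 3 x 500 GB -> 1.0 TB
```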

Network Equipment Summary

SPRACE is connected to the Internet by a 10 + 10 Gbps link provided by ANSP. All head nodes and storage servers have 10 Gbps connections; all worker nodes have 1 Gbps connections to top-of-rack switches with 10 Gbps uplinks to the core switch.

| Phase | Service | Vendor | Model | # of Devices | Description |
| II | top of rack | Dlink | DGS 1224T | 1 | 24 x 1 Gbps ports |
| II | top of rack | 3Com | 2824 | 1 | 24 x 1 Gbps ports |
| III | top of rack | 3Com | 3834 | 2 | 24 x 1 Gbps ports |
| IV | top of rack | SMC | TigerStack II 8848M | 1 | 48 x 1 Gbps + 2 x 10 Gbps ports |
| Finep | top of rack | Cisco | Nexus 5010 | 1 | 20 x 10 Gbps ports |
| IV | core | Cisco | Catalyst 6506E | 1 | ports |
| V | top of rack | LG-Ericsson | 4550G | 2 | 48 x 1 Gbps + 1 x 10 Gbps ports |

Decommissioned Hardware

Some machines, bought in phase I of the project, were decommissioned because they were based on a 32-bit architecture. Other machines, bought in phase II, were decommissioned after their hardware warranty expired.

Processing hardware

| Phase | Vendor | Model | Processor | Cores | RAM | # of nodes |
| I | Itautec | Infoserver 1252 | 2 x Intel Xeon DP 2.4 GHz | 2 | 1 GB | 24 |
| II | Itautec | Infoserver LX210 | 2 x Intel Xeon EM64T @ 3.0 GHz | 2 | 2 GB | 32 |
| III | Itautec | Infoserver LX211 | 2 x Intel Xeon Dual-Core 5130 @ 2.0 GHz | 4 | 4 GB | 32 |

Storage hardware

| Phase | Vendor | Model | Processor | Cores | RAM | # of nodes |
| I | Dell | PowerEdge 2650 | 2 x Intel Xeon 2.4 GHz | 2 | 2 GB | 1 |
| II | Dell | PowerEdge 1850 | 2 x Intel Xeon 3.0 GHz | 2 | 2 GB | 1 |
| IV | Sun | SunFire X4540 | 2 x AMD Opteron Quad-Core 2384 @ 2.7 GHz | 8 | 64 GB | 3 |

| Phase | Vendor | Model | Raw Disk Space | # of units |
| I | Dell | PowerVault 220S | 2 TB | 2 |
| II | Dell | PowerVault 220S | 4 TB | 2 |
| IV | Sun | SunFire X4540 | 48 TB | 3 |

Head Nodes hardware

| Phase | Service | Vendor | Model | Processor | Cores | RAM | Disk Space |
| I | old admin | Itautec | Infoserver 1251 | 2 x Intel Xeon DP 2.4 GHz | 2 | 1 GB | 288 GB (4 x 72 GB, SCSI, 10K RPM) |
| IV | old access | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 3.0 TB (RAID-5, 4 x 1 TB, 7200 RPM) |
| IV | old osg-ce | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3 x 500 GB, 7200 RPM) |
| IV | old osg-se | SGI | Altix XE 270 | 2 x Intel Xeon Quad-Core E5620 @ 2.4 GHz | 8 | 24 GB | 1.0 TB (RAID-5, 3 x 500 GB, 7200 RPM) |

Network Equipment

| Phase | Service | Vendor | Model | # of Devices | Description |
| Donation | core | Cisco | 3750 | 1 | 20 x 1 Gbps ports |
| I | top of rack | Dlink | DGS 1024T | 2 | 24 x 1 Gbps ports |

WLCG pledges

U.S. CMS Facilities

CMS T2 Associations and Allocations

WLCG pledges for 2013

According to the WLCG pledges for 2013, a nominal T2 site is

  • 10.6 KHS06 of Processing Resources
  • 787 TB of Disk Space

WLCG pledges for 2012

According to the WLCG pledges for 2012, a nominal T2 site is

  • 9.5 KHS06 of Processing Resources
  • 787 TB of Disk Space

According to the Service Credit in 2012, a nominal T2 site is

  • 10.9 KHS06 of Processing Resources
  • 810 TB of Disk Space
    • 30 TB of Stage-Out Space
    • 250 TB of Group Space (125 TB per group)
    • 200 TB of Central Space
    • 170 TB of Local Space
    • 160 TB of User Space (~40 users of 4 TB each)
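A quick check that the service-credit breakdown is internally consistent with the quoted total (the dictionary keys are our own shorthand):

```python
# 2012 service-credit disk allocations for a nominal T2 site, in TB.
breakdown_tb = {
    "stage-out": 30,
    "group": 250,    # 125 TB per group, two groups
    "central": 200,
    "local": 170,
    "user": 160,     # ~40 users x 4 TB each
}
print(sum(breakdown_tb.values()))  # -> 810, matching the quoted total
```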

Topic revision: r40 - 2020-01-31 - marcio
 
