COMMON SERVICES
Intensive Computing Centre
Intensive computing, or High Performance Computing (HPC), is a field that encompasses the hardware and software tools needed to run complex applications and techniques used in various academic and industrial domains such as hydrocarbons, pharmaceuticals, medical imaging, meteorology and physics simulation. Simulation techniques have developed hand in hand with the evolution of computing capacities and parallel architectures. These advances have made possible what was previously unthinkable: solving large numbers of numerical problems in short times. Currently, the computing capacities attained by some parallel infrastructures are of the order of petaflops (10^15 operations per second). Computing capacity refers to the amount of computation (number of elementary operations) performed per unit of time. An application is therefore considered intensive if its execution requires a large number of computing resources in order to complete in a reasonable time on a human timescale. In weather forecasting, for example, a forecast of next week's weather would be useless if the computation itself took a year.
The EMIR intensive computing unit has 35 nodes, including:
An administration node:
Bullx R425-E3: CPU 2 x E5-2670v2, 10 cores, 2.5 GHz – RAM 64 GB @ 1600 MHz – HDD 8 x 3000 GB – nVidia K5000 graphics card – InfiniBand QDR card with IB cable and Ethernet network card with cables
A storage node NFS:
Bullx R423-E3: CPU 2 x E5-2620v2, 2.1 GHz – RAM 64 GB @ 1600 MHz – HDD 2 x 500 GB – InfiniBand QDR card with IB cable – dual-port 8 Gbps FC HBA card with 2 FC cables – Ethernet network card with cables
A display node (Post-processing):
Bullx R425-E3: CPU 2 x E5-2670v2, 10 cores, 2.5 GHz – RAM 64 GB @ 1600 MHz – HDD 8 x 3000 GB – nVidia K5000 graphics card – InfiniBand QDR card with IB cable and Ethernet network card with cables
32 compute nodes:
Bullx R424-E3: CPU 2 x E5-2670v2, 10 cores, 2.5 GHz – RAM 64 GB @ 1600 MHz – HDD 500 GB – InfiniBand QDR card and Ethernet network card with IB and Ethernet cables
Interconnection network:
There are two networks:
A computing network:
The nodes are interconnected by a 40 Gb/s InfiniBand network.
An administration network:
An Ethernet network
Maximum performance:
Theoretical capacity:
32 nodes x 2 CPUs x 10 cores x 2.5 GHz x 8 FLOP/cycle = 12.8 TFLOPS
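The figure above follows the usual peak-performance formula: nodes x sockets per node x cores per socket x clock (GHz) x FLOP per cycle per core, where 8 FLOP/cycle is the double-precision AVX throughput of the E5-2670v2. A quick sketch of the arithmetic:

```shell
# Theoretical peak = nodes x sockets x cores x clock (GHz) x FLOP/cycle
# (8 FLOP/cycle = AVX double-precision throughput of the E5-2670v2)
nodes=32 sockets=2 cores=10 ghz=2.5 flops_per_cycle=8
awk -v n="$nodes" -v s="$sockets" -v c="$cores" -v g="$ghz" -v f="$flops_per_cycle" \
    'BEGIN { printf "%.1f TFLOPS\n", n * s * c * g * f / 1000 }'
# prints: 12.8 TFLOPS
```

The result is in TFLOPS because each node contributes 2 x 10 x 2.5 x 8 = 400 GFLOPS, and 32 x 400 GFLOPS = 12 800 GFLOPS.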
Functionality and benchmark tests:
9.1 TFLOPS
Installed software:
– Wien2k
– Abinit
– Quantum espresso
http://www.quantum-espresso.org/
– Java
– ASE (Atomic Simulation Environment)
– Python
The EMIR Intensive Computing Unit is intended for researchers, teachers and students with high-performance intensive computing needs. The cluster is available around the clock, and the service is open to all researchers at the UMS (Mustapha Stambouli University).
Rules of use and security
Every user must inform the computing centre (hpc.emir@univ-mascara.dz) of any change or addition to their contact information (email, laboratory, telephone number, …).
Each user must protect his account. In particular, it is highly recommended to change the password when the account is first created. The user is responsible for any damage that may result from improper use of his account.
Users receive information from the cluster administration (policy changes, introduction of new services, service interruptions, …) by means of notification emails sent to the address provided by the user when creating the account. Users agree to comply with these directives.
It is strictly forbidden to use pirated software. Commercial software may only be used in agreement with the administrator, after the validity of the licence has been checked.
Users with many jobs requiring several hours of computation are only allowed one job submission per working day.
The management node must under no circumstances be used to perform calculations. It is intended only for compiling programs and launching jobs via SLURM.
Jobs must be submitted through SLURM. Any job launched without SLURM will be stopped, and the user will be subject to the sanctions provided for in this charter.
It is recommended to schedule computation-intensive experiments at night and on weekends.
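As an illustration of the SLURM submission rule above, a minimal batch script might look like the sketch below. The job name, node counts, time limit and the Quantum ESPRESSO input file are hypothetical examples and should be adapted to the cluster's actual configuration:

```shell
#!/bin/bash
#SBATCH --job-name=qe-scf        # hypothetical job name
#SBATCH --nodes=2                # number of compute nodes requested
#SBATCH --ntasks-per-node=20     # 2 CPUs x 10 cores per node
#SBATCH --time=02:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out       # log file named after job name and job ID

# Run a Quantum ESPRESSO plane-wave calculation over MPI
# (the input file name scf.in is an assumption for illustration)
mpirun pw.x -in scf.in
```

Such a script is submitted from the management node with `sbatch job.sh`, and `squeue -u $USER` then shows the job's state; the computation itself runs entirely on the compute nodes allocated by the scheduler.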
Sanctions:
If a user does not respect the rules of use and security:
The administrator sends a first warning email informing the user of the type of offence committed and of the sanctions he will face if it recurs.
If the offence is repeated, the user's account is deactivated for 2 weeks. If it is repeated a third time, the account is permanently deactivated.
Contact
Intensive Computing Centre
hpc.emir@univ-mascara.dz