
Announcements

Kuma Cluster Full Production & Pricing – Nov 1st

We are excited to announce the successful completion of the beta testing phase for the Kuma GPU cluster: full production will begin on November 1st, 2024. Your participation in the beta phase has been invaluable, with a total of approximately 450,000 GPU-hours of compute jobs executed. This extensive testing allowed us to identify and resolve various hardware and software issues, ensuring that Kuma is ready for full production.

Kuma Beta Opening

After a successful restricted beta with more than 80'000 jobs submitted, we are pleased to announce that Kuma, the new GPU-based cluster, is now open for testing! This marks an important milestone as we transition from the Izar cluster, which will soon be reassigned to educational purposes, to the much more powerful Kuma cluster. You can now connect to the login node at kuma.hpc.epfl.ch to begin testing your codes.

New Archiving Service Now Available

We are happy to announce the launch of our new archiving service, designed to provide long-term, low-cost storage for your research data.

Accessible from the frontend nodes of our Izar and Jed clusters, this service utilizes a reliable magnetic tape system to ensure your data is preserved for a minimum of 10 years.

Annual SCITAS maintenance

This announcement is important and may affect your work, so we strongly recommend taking the time to read it carefully.

Our annual maintenance period is scheduled from February 5 to February 19, 2024. This maintenance is essential for improving our services and includes the following key upgrades:

End of year retrospective

As 2023 comes to an end, we would like to share a short message summarizing this (almost) past year. From the computational point of view, 2023 was outstanding! The service was used by 1200 unique users from 135 labs. Around 131 million core-hours and 236'000 GPU-hours were consumed on the machines. Of these, 7.2 million core-hours and 19'000 GPU-hours were dedicated to students and the 19 courses that used our infrastructure.

New billing scheme and usage capping

Introducing RAM memory billing: A fairer approach to compute node usage

Historically, SCITAS has billed only for the fraction of a node's CPU cores allocated to a simulation. While this metric often provides valuable insight into the processing power a user requires, it does not capture the complete picture of resource consumption. For example, some simulations need a large amount of memory while running serially, i.e. on a single CPU core. A significant fraction of the node is then set aside for such a job, yet it is billed for only one core. This creates an unfair situation among users, as the billing does not take into account the actual fraction of the compute node that the job occupies.
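The announcement does not spell out the new formula, but a natural way to account for memory is to bill each job for the larger of its CPU-core share and its memory share of a node. The sketch below is purely illustrative: the function name, the node sizes, and the max-of-fractions rule are assumptions chosen for the example, not the actual SCITAS billing code.

```python
# Illustrative sketch only: bill a job for the larger of its CPU-core
# share and its memory share of a node. The node sizes and the
# max-of-fractions rule are assumptions, not the actual SCITAS formula.

def billed_node_fraction(cores_used, mem_used_gb,
                         node_cores=72, node_mem_gb=512):
    """Return the fraction of a node the job would be billed for."""
    cpu_fraction = cores_used / node_cores
    mem_fraction = mem_used_gb / node_mem_gb
    return max(cpu_fraction, mem_fraction)

# A serial job (1 core) that reserves half the node's memory:
# CPU-only billing would charge 1/72 of the node (about 1.4%),
# while memory-aware billing charges 50% of the node.
print(billed_node_fraction(cores_used=1, mem_used_gb=256))   # 0.5
print(billed_node_fraction(cores_used=36, mem_used_gb=8))    # 0.5
```

Under such a rule, a single-core job that reserves half of a node's memory is billed for half the node, matching the share of the machine it effectively makes unavailable to other users.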