End of the annual maintenance
We are pleased to announce the successful completion of our annual maintenance. As of today, February 19, 2024, all SCITAS services have been fully restored and are now operational.
This announcement is important and may affect your work. We strongly recommend taking the time to read it thoroughly.
Our annual maintenance period is approaching, scheduled from February 5 to February 19, 2024. This maintenance is essential for enhancing our services and includes the following key upgrades:
As 2023 comes to an end, we wanted to share with you this short message summarizing this (almost) past year. From the computational point of view, 2023 was outstanding! The service was used by 1200 unique users from 135 labs. Around 131 million core-hours and 236'000 GPU-hours were consumed on the machines. Of these, 7.2 million core-hours and 19'000 GPU-hours were dedicated to students and the 19 courses that used our infrastructure.
We are proud to announce the launch of our new user documentation website!
Have you ever logged into a system and been greeted by a welcoming message? That's the Message of the Day, or MOTD for short. At SCITAS, we use the MOTD on every cluster to provide important information and updates to our users. Recently, we revamped our MOTD to make it even more informative and visually appealing.
Historically, SCITAS has billed only for the fraction of CPU cores allocated to a simulation. While this metric often provides valuable insight into the processing power a user requires, it does not capture the complete picture of resource consumption. For example, some simulations may require a lot of memory while performing their task serially, i.e. using only one CPU core. This leads to situations where a significant fraction of the node is effectively reserved for such a job, yet it is billed for only one CPU core. This created an unfair situation among users, as the billing did not account for the actual fraction of the compute node being utilized.
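The idea of billing by the dominant resource can be sketched as follows. This is a minimal illustration, not the actual SCITAS accounting code: the node sizes (72 cores, 384 GB) are hypothetical, and the scheme simply charges for the largest fraction of the node a job occupies, similar in spirit to Slurm's TRES billing with MAX weighting.

```python
def billable_cores(cores_used, mem_used_gb, node_cores=72, node_mem_gb=384):
    """Charge for the largest fraction of the node a job occupies.

    Hypothetical node dimensions; expressed in core-equivalents so the
    result is comparable to plain CPU-core billing.
    """
    cpu_fraction = cores_used / node_cores
    mem_fraction = mem_used_gb / node_mem_gb
    return max(cpu_fraction, mem_fraction) * node_cores

# A serial job using 1 core but half of the node's memory:
# core-only billing would charge 1 core, dominant-resource billing
# charges 36 core-equivalents (half the node).
print(billable_cores(1, 192))   # 36.0
print(billable_cores(36, 10))   # 36.0 (here CPU is the dominant resource)
```

Under this scheme, a memory-hungry serial job and a 36-core CPU-bound job that each tie up half a node are billed the same, which removes the asymmetry described above.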