New User Documentation Website
We are proud to announce the launch of our new user documentation website!
Starting on Monday, 28 August, and for a period of two days, the Izar cluster will be unavailable to users due to the migration from RHEL 7 to RHEL 8.
The Downfall vulnerability, identified as CVE-2022-40982, enables a user to access and steal data from other users who share the same computer. It affects most Intel CPUs from the 6th generation (Skylake) through the 11th generation (Tiger Lake). For instance, a malicious app obtained from an app store could use the Downfall attack to steal sensitive information such as passwords, encryption keys, and private data like banking details, personal emails, and messages.
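If you want to check whether a particular Linux node reports its Downfall mitigation status, recent kernels expose it through sysfs. Below is a minimal sketch, assuming a kernel new enough to provide the gather_data_sampling entry (the kernel's name for this vulnerability); older kernels simply do not have the file.

```python
#!/usr/bin/env python3
"""Minimal sketch: report the kernel's Downfall (GDS) mitigation status.

Assumes a Linux kernel recent enough to expose the
'gather_data_sampling' entry under /sys; older kernels do not
create this file at all.
"""
from pathlib import Path

STATUS_FILE = Path("/sys/devices/system/cpu/vulnerabilities/gather_data_sampling")

def downfall_status() -> str:
    # Missing file means the running kernel predates the GDS patches.
    if not STATUS_FILE.exists():
        return "Unknown (kernel does not report gather_data_sampling)"
    return STATUS_FILE.read_text().strip()

if __name__ == "__main__":
    print(f"Downfall / GDS status: {downfall_status()}")
```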
This morning, at around 06:30, the Jed frontend was shut down by an unexpected power issue on its direct power line. As a result, we had to reboot the frontend, which may have caused some connections to be lost.
Neither running jobs nor jobs waiting in the queue were affected.
We are currently investigating the cause of this problem and apologize for any inconvenience it may have caused.
Have you ever logged into a system and been greeted by a welcome message? That's the Message of the Day, or MOTD for short. At SCITAS, we use the MOTD on every cluster to provide important information and updates to our users. Recently, we revamped our MOTD to make it even more informative and visually appealing.
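As a purely illustrative sketch of how such a message can be assembled, the snippet below writes a simple MOTD-style banner to a file. The output path, cluster name, and content are hypothetical and do not reflect the actual SCITAS setup; on most Linux systems the text shown at login is read from /etc/motd.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: generate a simple Message of the Day.

The output path and content are hypothetical examples; they do not
reflect the actual SCITAS MOTD.
"""
from datetime import date
from pathlib import Path

# Hypothetical output location; writing the real /etc/motd requires root.
MOTD_PATH = Path("/tmp/motd.example")

def build_motd(cluster: str, notice: str) -> str:
    # Assemble a short banner with a dated notice for users logging in.
    lines = [
        f"Welcome to {cluster}",
        f"Date: {date.today().isoformat()}",
        "",
        f"NOTICE: {notice}",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    MOTD_PATH.write_text(build_motd("example-cluster", "Scheduled maintenance next Monday."))
    print(MOTD_PATH.read_text())
```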
Historically, SCITAS has billed only for the fraction of the CPUs allocated to a simulation. While this metric often provides valuable insight into the processing power required by a user, it does not capture the complete picture of resource consumption. For example, some simulations require a lot of memory while performing their task serially, i.e. using only one CPU core. This led to situations where a large fraction of the node was allocated to such a job, yet it was billed based on a single CPU core. This created an unfair situation among users, as the billing did not take into account the actual fraction of the compute node that was utilized.
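To illustrate the imbalance described above, here is a small sketch comparing the fraction of a node a job actually occupies with the fraction that CPU-only billing would charge. The node specifications (72 cores, 512 GB of RAM) are hypothetical values chosen for the example, not the configuration of any SCITAS cluster.

```python
#!/usr/bin/env python3
"""Sketch of why CPU-only billing can under-charge memory-heavy jobs.

Node specifications below are hypothetical example values, not the
actual configuration of any SCITAS cluster.
"""

NODE_CORES = 72       # hypothetical cores per node
NODE_MEM_GB = 512     # hypothetical memory per node

def allocated_fraction(cores: int, mem_gb: float) -> float:
    """Fraction of the node effectively blocked by the job:
    the scheduler must reserve whichever resource dominates."""
    return max(cores / NODE_CORES, mem_gb / NODE_MEM_GB)

def cpu_only_billed_fraction(cores: int) -> float:
    """Fraction charged under the historical CPU-only billing."""
    return cores / NODE_CORES

if __name__ == "__main__":
    # A serial job (1 core) that requests 256 GB of memory.
    cores, mem_gb = 1, 256
    print(f"Allocated fraction of node: {allocated_fraction(cores, mem_gb):.2f}")  # 0.50
    print(f"CPU-only billed fraction:   {cpu_only_billed_fraction(cores):.3f}")    # ~0.014
```

In this example, the job blocks half the node's memory but is charged for barely more than one percent of it, which is exactly the kind of discrepancy the billing change addresses.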