Configuring, deploying, and maintaining high-performance computing environments is a challenging and time-consuming task, but it is critical to achieving both compatibility and performance across a wide range of simulation and modeling workloads.
Organizations typically rely on in-house experts or systems integrators to manage their on-premises HPC systems. Deploying similarly optimized and compatible HPC clusters in the cloud can pose new challenges, including interfacing with new tools and APIs, and conducting extended studies to discover optimal system configurations. Google Cloud is working with its partners to simplify this experience and ease the path to HPC in the cloud.
Intel’s Select Solutions initiative naturally aligns with these efforts, because Intel Select Solutions for HPC offer compatibility with a wide range of HPC applications and provide verified real-world performance.
We are excited to introduce Intel Select Solutions for Simulation and Modeling on Google Cloud to help our customers create optimized HPC environments and get their projects up and running quickly.
Features of Intel Select Solution verified clusters
Beginning immediately, customers can deploy Intel Select Solutions for HPC on Google Cloud. These solutions have been engineered to satisfy Intel’s exacting software-, system-, and solution-level performance and compatibility standards for HPC workloads.
Today’s Intel Select Solution verified clusters on Google Cloud include the following features:
- Compute-optimized (C2) instances, based on 2nd Generation Intel Xeon Scalable processors, which deliver up to 40% better performance than previous-generation instance types thanks to a higher sustained clock speed of 3.8 GHz and greater memory bandwidth. These instances also offer advanced HPC capabilities, such as fixed virtual-to-physical core mapping and exposure of the underlying NUMA architecture to the guest OS, both of which are critical for high-performance MPI- and OpenMP-based simulation and modeling applications. (A sample gcloud command after this list shows how to create such an instance.)
- Pre-tuned HPC VM images, which make it quick and easy to instantiate VMs that deliver high HPC application performance and scalability on Google Cloud. The HPC VM image applies best-practice tunings that reduce MPI ping-pong latency to single-digit microseconds, delivering small MPI messages in 10 µs or less. The latest HPC VM images are configured to align with the node-level requirements of the Intel HPC Platform Specification to ensure application compatibility; the sample command after this list boots directly from this image.
- Intel Parallel Studio XE MPI environment, which includes Intel MPI 2018, the Intel Math Kernel Library (MKL), Intel Distribution for Python, and other runtime libraries. The Intel MPI 2018 environment is the recommended library for optimal MPI performance on Google Cloud, and Intel MKL is recommended for high-performance scientific and engineering computations. (A short sketch of loading this environment appears after this list.)
- Intel Cluster Checker, which helps verify the configuration and performance of HPC clusters. Cluster Checker analyzes cluster uniformity, performance characteristics, functionality, and compliance with the Intel HPC Platform Specification. It also identifies issues on the cluster and provides actionable recommendations for resolving them. (An example invocation appears after this list.)
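To make the first two bullets concrete, here is a minimal sketch of creating a single C2 instance that boots from the public HPC VM image using the gcloud CLI. The instance name, zone, and machine type are placeholder values; the image family and project shown are those published for the CentOS-based HPC VM image at the time of writing.

    # Create a compute-optimized C2 instance that boots from the HPC VM image.
    # hpc-node-1 and us-central1-f are example values; adjust for your project.
    gcloud compute instances create hpc-node-1 \
        --zone=us-central1-f \
        --machine-type=c2-standard-60 \
        --image-family=hpc-centos-7 \
        --image-project=cloud-hpc-image-public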
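Once Intel Parallel Studio XE is present on the nodes, the MPI runtime is typically activated by sourcing its environment script. The installation path below is an assumption based on Intel MPI 2018's default layout, and hello.c stands in for any MPI program.

    # Load the Intel MPI 2018 runtime environment (default install path; may vary).
    source /opt/intel/compilers_and_libraries_2018/linux/mpi/intel64/bin/mpivars.sh

    # Compile and launch a simple MPI job: 60 ranks, 30 per node, on two hosts.
    mpicc -o hello hello.c
    mpirun -n 60 -ppn 30 -hosts hpc-node-1,hpc-node-2 ./hello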
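And here is a sketch of an Intel Cluster Checker run, assuming the clck tool is installed and its environment is loaded. The framework definition name is illustrative; the Cluster Checker documentation lists the definitions that correspond to the Intel HPC Platform Specification.

    # nodefile lists the cluster hosts to check, one hostname per line.
    printf 'hpc-node-1\nhpc-node-2\n' > nodefile

    # Collect data and analyze it against a framework definition
    # (the definition name here is illustrative; see the clck documentation).
    clck -f nodefile -F intel_hpc_platform_compat-hpc-2018.0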
Get started today!
We’ve teamed up with SchedMD on the Slurm on Google Cloud project, which makes it easy to deploy turnkey HPC environments verified as Intel Select Solutions for Simulation and Modeling in just a few minutes.
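As a rough sketch of the flow, the deployment builds on SchedMD's slurm-gcp repository together with Terraform. The module directory and variables file below are placeholders; the quickstart guide describes the exact steps and current repository layout.

    # Fetch SchedMD's Slurm on Google Cloud deployment scripts.
    git clone https://github.com/SchedMD/slurm-gcp.git
    cd slurm-gcp

    # Deploy a cluster with Terraform from the repo's example configuration.
    # The tfvars file name is a placeholder; see the quickstart guide.
    terraform init
    terraform apply -var-file=my-cluster.tfvars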
Check out our documentation and quickstart guide to start creating Intel Select Solution verified clusters using Slurm on Google Cloud today!
What’s next?
We are also working with Altair and Omnibond to develop additional turnkey Intel Select Solutions, including solutions based on Altair PBS Professional, Altair Grid Engine, and Omnibond CloudyCluster. If you’re interested in learning more about these solutions or have a requirement for additional solutions, please contact us.
Special thanks to Hongbo Lu, Jiuxing Liu, Tanner Love, Jian Yang, Pallavi Phene and Doug Jacobsen for their contributions.