Tag: Flux

Advanced batch computing with Slurm on the Great Lakes cluster

OVERVIEW

This workshop will cover more advanced topics in cluster computing on the U-M Great Lakes Cluster. Topics to be covered include a review of common parallel programming models and basic use of Great Lakes; dependent and array scheduling; troubleshooting and analysis; a brief introduction to workflow scripting using bash; parallel processing in one or more of Python, R, and MATLAB; and parallel profiling of C and Fortran MPI and OpenMP programs using Allinea Performance Reports and Allinea MAP.
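
As a taste of the scheduling topics, dependent and array jobs are expressed through options to Slurm's sbatch command. The sketch below is illustrative only: the account, partition, and file names are placeholders, not values used in this workshop.

    #!/bin/bash
    # array.sbat -- minimal job-array sketch; account, partition, and file names are placeholders
    #SBATCH --job-name=array-demo
    #SBATCH --account=workshop_account   # replace with your workshop job account
    #SBATCH --partition=standard         # assumed partition name
    #SBATCH --time=00:05:00
    #SBATCH --array=1-4                  # run four independent array tasks
    #SBATCH --output=array-%A_%a.out     # %A = array job ID, %a = array task index

    echo "Task ${SLURM_ARRAY_TASK_ID} running on $(hostname)"

Submitting the array and chaining a dependent job would then look like this (postprocess.sbat is a hypothetical second batch script):

    # Submit the array, capture its job ID, and run a follow-up job only if every task succeeds.
    jobid=$(sbatch --parsable array.sbat)
    sbatch --dependency=afterok:${jobid} postprocess.sbat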

PRE-REQUISITES

This course assumes familiarity with the Linux command line, such as that gained from the CSCAR/ARC-TS workshop Introduction to the Linux Command Line. In particular, participants should understand how files and folders work, be able to create text files using the nano editor, be able to create and remove files and folders, and understand what input and output redirection are and how to use them.

INSTRUCTORS

Dr. Charles J Antonelli
Research Computing Services
LSA Technology Services

Charles is a High Performance Computing Consultant in the Research Computing Services group of LSA TS at the University of Michigan, where he is responsible for high performance computing support and education, and was an Advocate to the Departments of History and Communications. Prior to this, he built a parallel data ingestion component of a novel earth science data assimilation system and a secure packet vault, and worked on the No. 5 ESS Switch at Bell Labs in the 1980s. He has taught courses in operating systems, distributed file systems, C++ programming, security, and database application design.

John Thiels
Research Computing Services
LSA Technology Services

MATERIALS

COURSE PREPARATION

To participate successfully in the workshop exercises, you must have a Great Lakes user account and a Great Lakes job account (one is created for each workshop), and be enrolled in Duo. The user account allows you to log in to the cluster; create, compile, and test applications; and prepare jobs for submission. The job account allows you to submit those jobs, executing the applications in parallel on the cluster and charging their resource use against the account. Duo is required to help authenticate you to the cluster.

GREAT LAKES USER ACCOUNT

If you already have a Flux user account, you don’t need to do anything to obtain a Great Lakes user account. Otherwise, go to the Flux user account application page at https://arc-ts.umich.edu/fluxform/.

Please note that obtaining a user account requires human processing, so be sure to do this at least two business days before class begins.

GREAT LAKES JOB ACCOUNT

We create a job account for the workshop so you can run jobs on the cluster during the workshop and for one day after for those who would like additional practice. The workshop job account is quite limited and is intended only to run examples to help you cement the details of job submission and management. If you already have an existing Great Lakes job account, you can use that, though if there are any issues with that job account, we will ask you to use the workshop job account.
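
At submission time, the job account is what you pass to sbatch's --account option; for example (the account and script names below are placeholders, not the actual workshop values):

    # Charge this job's resource use against the workshop account (names are illustrative only).
    sbatch --account=your_workshop_account first_job.sbat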

DUO AUTHENTICATION

Duo two-factor authentication is required to log in to the cluster. When logging in, you will need to type your UMICH (AKA Level 1) password as well as authenticate through Duo in order to access Great Lakes.

If you need to enroll in Duo, follow the instructions at Enroll a Smartphone or Tablet in Duo.

Please enroll in Duo before you come to class.

LAPTOP PREPARATION

You do not need to bring your own laptop to class. The classroom contains Windows or Mac computers, which require your uniqname and UMICH (AKA Level 1) password to log in and have all necessary software pre-loaded.

If you want to use a laptop for the course, you are welcome to do so: please see our web page on Preparing your laptop to use Flux. However, if there are problems connecting your laptop, you will be asked to switch to one of the provided classroom computers. We cannot stop to debug connection issues with personal or departmental laptops during the class.

If you are unable to attend the presentation in person, we will offer a link to the live course via BlueJeans. Please register as if attending in person; this will put you on the wait list, but we will set up your account for remote attendance.

U-M selects Dell EMC, Mellanox and DDN to Supply New “Great Lakes” Computing Cluster

The University of Michigan has selected Dell EMC as lead vendor to supply its new $4.8 million Great Lakes computing cluster, which will serve researchers across campus. Mellanox Technologies will provide networking solutions, and DDN will supply storage hardware.

Great Lakes will be available to the campus community in the first half of 2019, and over time will replace the Flux supercomputer, which serves more than 2,500 active users at U-M for research ranging from aerospace engineering simulations and molecular dynamics modeling to genomics and cell biology to machine learning and artificial intelligence.

Great Lakes will be the first cluster in the world to use the Mellanox HDR 200 gigabit per second InfiniBand networking solution, enabling faster data transfer speeds and increased application performance.

“High-performance research computing is a critical component of the rich computing ecosystem that supports the university’s core mission,” said Ravi Pendse, U-M’s vice president for information technology and chief information officer. “With Great Lakes, researchers in emerging fields like machine learning and precision health will have access to a higher level of computational power. We’re thrilled to be working with Dell EMC, Mellanox, and DDN; the end result will be improved performance, flexibility, and reliability for U-M researchers.”

“Dell EMC is thrilled to collaborate with the University of Michigan and our technology partners to bring this innovative and powerful system to such a strong community of researchers,” said Thierry Pellegrino, vice president, Dell EMC High Performance Computing. “This Great Lakes cluster will offer an exceptional boost in performance, throughput and response to reduce the time needed for U-M researchers to make the next big discovery in a range of disciplines from artificial intelligence to genomics and bioscience.”

The main components of the new cluster are:

  • Dell EMC PowerEdge C6420 compute nodes, PowerEdge R640 high memory nodes, and PowerEdge R740 GPU nodes
  • Mellanox HDR 200Gb/s InfiniBand ConnectX-6 adapters, Quantum switches and LinkX cables, and InfiniBand gateway platforms
  • DDN GRIDScaler® 14KX® and 100 TB of usable IME® (Infinite Memory Engine) memory

“HDR 200G InfiniBand provides the highest data speed and smart In-Network Computing acceleration engines, delivering HPC and AI applications with the best performance, scalability and efficiency,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “We are excited to collaborate with the University of Michigan, Dell EMC and DataDirect Networks, in building a leading HDR 200G InfiniBand-based supercomputer, serving the growing demands of U-M researchers.”

“DDN has a long history of working with Dell EMC and Mellanox to deliver optimized solutions for our customers. We are happy to be a part of the new Great Lakes cluster, supporting its mission of advanced research and computing. Partnering with forward-looking thought leaders such as these is always enlightening and enriching,” said Dr. James Coomer, SVP Product Marketing and Benchmarks at DDN.

Great Lakes will provide a significant improvement in computing performance over Flux. For example, each compute node will have more cores, higher maximum speeds, and more memory. The cluster will also have improved internet connectivity and file system performance, as well as NVIDIA GPUs with Tensor Cores, which offer far better machine learning performance than prior generations of GPUs.

“Users of Great Lakes will have access to more cores, faster cores, faster memory, faster storage, and a more balanced network,” said Brock Palen, Director of Advanced Research Computing – Technology Services (ARC-TS).

The Flux cluster was created approximately 8 years ago, although many of the individual nodes have been added since then. Great Lakes represents an architectural overhaul that will result in better performance and efficiency. Based on extensive input from faculty and other stakeholders across campus, the new Great Lakes cluster will be designed to deliver similar services and capabilities as Flux, including the ability to accommodate faculty purchases of hardware, access to GPUs and large-memory nodes, and improved support for emerging uses such as machine learning and genomics.

ARC-TS will operate and maintain the cluster once it is built. Allocations of computing resources through ARC-TS include access to hundreds of software titles, as well as support and consulting from professional staff with decades of combined experience in research computing.

Updates on the progress of Great Lakes will be available at https://arc-ts.umich.edu/greatlakes/.

CSCAR provides walk-in support for new Flux users

CSCAR now provides walk-in support during business hours for students, faculty, and staff seeking assistance in getting started with the Flux computing environment. CSCAR consultants can walk a researcher through the steps of applying for a Flux account, installing and configuring a terminal client, connecting to Flux, using basic SSH and Unix command-line tools, and obtaining or accessing allocations.

In addition to walk-in support, CSCAR has several staff consultants with expertise in advanced and high performance computing who can work with clients on a variety of topics such as installing, optimizing, and profiling code.  

Support via email is also provided via hpc-support@umich.edu.  

CSCAR is located in room 3550 of the Rackham Building (915 E. Washington St.). Walk-in hours are from 9 a.m. – 5 p.m., Monday through Friday, except for noon – 1 p.m. on Tuesdays.

See the CSCAR web site (cscar.research.umich.edu) for more information.

ARC-TS seeks input on next generation HPC cluster

The University of Michigan is beginning the process of building our next generation HPC platform, “Big House.”  Flux, the shared HPC cluster, has reached the end of its useful life. Flux has served us well for more than five years, but as we move forward with replacement, we want to make sure we’re meeting the needs of the research community.

ARC-TS will be holding a series of town halls to take input from faculty and researchers on the next HPC platform to be built by the University.  These town halls are open to anyone and will be held at:

  • College of Engineering, Johnson Room, Tuesday, June 20th, 9:00a – 10:00a
  • NCRC Bldg 300, Room 376, Wednesday, June 21st, 11:00a – 12:00p
  • LSA #2001, Tuesday, June 27th, 10:00a – 11:00a
  • 3114 Med Sci I, Wednesday, June 28th, 2:00p – 3:00p

Your input will help ensure that U-M stays on course in providing HPC that meets the needs of the research community, so we hope you will make time to attend one of these sessions. If you cannot attend, please email hpc-support@umich.edu with any input you want to share.

Video, slides available: “Advanced Research Computing at Michigan, An Overview,” Brock Palen, ARC-TS

Video (http://myumi.ch/aAG7x) and slides (http://myumi.ch/aV7kz) are now available from Advanced Research Computing – Technology Services (ARC-TS) Associate Director Brock Palen’s presentation “Advanced Research Computing at Michigan, An Overview.”

Palen gave the talk on June 27, 2016, outlining the resources and services available from ARC-TS as well as from off-campus resource providers.