Explore ARC

Research Computing on the Great Lakes cluster

This workshop will provide a brief overview of the new HPC environment and is intended for current Flux and Armis users. We will use the temporary Beta HPC cluster to demonstrate how jobs will be submitted and managed under the new Great Lakes, Armis2, and Lighthouse clusters available later this year.

There are many differences between the familiar Flux environment and that of the new HPC clusters, including a new batch scheduling system, a new interactive batch job environment, a new HPC web portal, a new module environment, and a new on-demand-only job accounting system.

We will cover these differences in the workshop, and provide hands-on training in creating and running job submission scripts in the new HPC environment.  Students are expected to be conversant with the Linux command line and have experience in creating, submitting, and troubleshooting PBS batch scripts.

Introduction to the Linux Command Line

This course will familiarize the student with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also generically referred to as “the command line”. Topics include: a brief overview of Linux, the Bash shell, navigating the file system, basic commands, shell redirection, permissions, processes, and the command environment. The workshop will also provide a quick introduction to nano, a simple text editor that will be used in subsequent workshops to edit files.
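A few of the topics above, such as shell redirection and permissions, can be sketched in a short Bash session. The file and directory names here are illustrative, not part of the course materials:

```shell
# Illustrative Bash session; file and directory names are placeholders.
mkdir -p demo                           # create a directory
echo "hello" > demo/greeting.txt        # redirect stdout to a new file
cat demo/greeting.txt >> demo/log.txt   # append output to a file
wc -l < demo/log.txt                    # redirect a file to stdin
chmod 600 demo/greeting.txt             # permissions: owner read/write only
ls -l demo/greeting.txt                 # inspect the resulting permissions
```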

INSTRUCTOR

Kenneth Weiss
IT Project Senior Manager
Medical School Information Services (MSIS)

Ken is a High Performance Computing Consultant in the Computational Research Consulting Division of MSIS at the University of Michigan. He works with a team of IT specialists to provide high performance computing support and training for the Medical School. Prior to this, he spent 21 years managing research computing, including an HPC cluster, for Dr. Charles Sing in the Human Genetics Department.

MATERIALS

COURSE PREPARATION

You must register at least three full days prior to the event so that we have time to ensure you have the proper UM credentials for the workshop. This allows enough time for ITS to adjust your account in case you do not have access to the Linux systems.

If you have questions about this workshop, please send an email to the instructor at kgweiss@umich.edu


Advanced batch computing with Slurm on the Great Lakes cluster

This workshop will cover more advanced topics in cluster computing on the U-M Great Lakes Cluster. Topics include a review of common parallel programming models and basic use of Great Lakes; dependent and array scheduling; troubleshooting and analysis; a brief introduction to workflow scripting using bash; parallel processing in one or more of Python, R, and MATLAB; and parallel profiling of C and Fortran MPI and OpenMP programs using Allinea Performance Reports and Allinea MAP. We will issue you a temporary Great Lakes account to use for the course, or you can use your existing Great Lakes account if you have one.
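Dependent scheduling, one of the topics above, can be sketched as follows. The script names are placeholders, and the sbatch commands are shown as comments because they only run on the cluster itself:

```shell
# Sketch of dependent scheduling with Slurm; script names are placeholders.
cat > preprocess.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=preprocess
#SBATCH --time=00:10:00
echo "preprocessing input"
EOF

cat > analyze.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=analyze
#SBATCH --time=00:30:00
echo "analyzing preprocessed data"
EOF

# On the cluster, chain them so analyze.sh starts only after
# preprocess.sh completes successfully:
#   jobid=$(sbatch --parsable preprocess.sh)
#   sbatch --dependency=afterok:$jobid analyze.sh
```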

Course Preparation (PLEASE READ)

Obtain a user login on Flux. If you do not have a Flux user login, go to the application page at: https://arc-ts.umich.edu/fluxform/

Register for Duo authentication.

This course assumes familiarity with the Linux command line as might be gained from the CSCAR/ARC-TS workshop Introduction to the Linux Command Line. In particular, participants should understand how files and folders work, be able to create text files using the nano editor, be able to create and remove files and folders, and understand what input and output redirection are and how to use them.

If you are unable to attend the presentation in person, we will offer a link into the live course via BlueJeans. Please register as if attending in person. This will put you on the wait list, but we will get your account set up for remote attendance.

Introduction to the Great Lakes cluster and batch computing with Slurm

OVERVIEW

This workshop will provide a brief overview of the components of the Great Lakes Cluster. The main body of the workshop will cover the resource manager and scheduler, creating submission scripts to run jobs and the options available in them, and hands-on experience. By the end of the workshop, every participant should have created a submission script, submitted a job, tracked its progress, and collected its output. Participants will have several working examples in their own home directories from which to build their own submission scripts.
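A submission script of the kind created in the workshop might look like the sketch below. The account and partition names are placeholders; use the values issued to you for the course. The submission commands are shown as comments because they only run on the cluster:

```shell
# Minimal Slurm submission script; account and partition are placeholders.
cat > first_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=first_job
#SBATCH --account=workshop_account   # placeholder job account
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:05:00
#SBATCH --mem=1g

echo "Hello from $(hostname)"
EOF

# On the cluster you would then:
#   sbatch first_job.sh        # submit the job; prints its job ID
#   squeue -u $USER            # track its progress
#   cat slurm-<jobid>.out      # collect its output
```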

PRE-REQUISITES

This course assumes familiarity with the Linux command line as might be gained from the CSCAR/ARC-TS workshop Introduction to the Linux Command Line. In particular, participants should understand how files and folders work, be able to create text files using the nano editor, be able to create and remove files and folders, and understand what input and output redirection are and how to use them.

INSTRUCTORS

Dr. Charles J Antonelli
Research Computing Services
LSA Technology Services

Charles is a High Performance Computing Consultant in the Research Computing Services group of LSA TS at the University of Michigan, where he is responsible for high performance computing support and education, and was an Advocate to the Departments of History and Communications. Prior to this, he built a parallel data ingestion component of a novel earth science data assimilation system and a secure packet vault, and worked on the No. 5 ESS Switch at Bell Labs in the 1980s. He has taught courses in operating systems, distributed file systems, C++ programming, security, and database application design.

John Thiels
Research Computing Services
LSA Technology Services

Mark Champe
Research Computing Services
LSA Technology Services

MATERIALS

COURSE PREPARATION

In order to participate successfully in the workshop exercises, you must have a user login, a Slurm account, and be enrolled in Duo. The user login allows you to log in to the cluster, create, compile, and test applications, and prepare jobs for submission. The Slurm account allows you to submit those jobs, executing the applications in parallel on the cluster and charging their resource use to the account. Duo is required to help authenticate you to the cluster.


USER LOGIN

If you already have a Flux user login, you don’t need to do anything. Otherwise, go to the Flux user login application page at https://arc-ts.umich.edu/fluxform/.

Please note that obtaining a user account requires human processing, so be sure to do this at least two business days before class begins.


SLURM ACCOUNT

We create a Slurm account for the workshop so you can run jobs on the cluster during the workshop and for one day after for those who would like additional practice. The workshop job account is quite limited and is intended only to run examples to help you cement the details of job submission and management. If you already have an existing Slurm account, you can use that, though if there are any issues with that account, we will ask you to use the workshop account.


DUO AUTHENTICATION

Duo two-factor authentication is required to log in to the cluster. When logging in, you will need to type your UMICH (AKA Level 1) password as well as authenticate through Duo in order to access Great Lakes.

If you need to enroll in Duo, follow the instructions at Enroll a Smartphone or Tablet in Duo.

Please enroll in Duo before you come to class.

LAPTOP PREPARATION

You do not need to bring your own laptop to class. The classroom contains Windows and Mac computers, which require your uniqname and UMICH (AKA Level 1) password to log in, and which have all the necessary software pre-loaded.

If you want to use a laptop for the course, you are welcome to do so; please see our web page on Preparing your laptop to use Flux. However, if there are problems connecting your laptop, you will be asked to switch to a provided computer for the class. We cannot stop to debug connection issues with personal or departmental laptops during the class.

If you are unable to attend the presentation in person, we will offer a link into the live course via BlueJeans. Please register as if attending in person. This will put you on the wait list, but we will get your account set up for remote attendance.

Advanced batch computing with Slurm on the Great Lakes cluster

OVERVIEW

This workshop will cover more advanced topics in cluster computing on the U-M Great Lakes Cluster. Topics include a review of common parallel programming models and basic use of Great Lakes; dependent and array scheduling; troubleshooting and analysis; a brief introduction to workflow scripting using bash; parallel processing in one or more of Python, R, and MATLAB; and parallel profiling of C and Fortran MPI and OpenMP programs using Allinea Performance Reports and Allinea MAP.
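Array scheduling, another of the topics above, can be sketched as follows. The script name is a placeholder, and the sbatch command is shown as a comment because it only runs on the cluster:

```shell
# Sketch of a Slurm job array; the script name is a placeholder.
cat > array_demo.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=array_demo
#SBATCH --array=1-4              # four tasks with IDs 1 through 4
#SBATCH --time=00:05:00
echo "processing chunk $SLURM_ARRAY_TASK_ID"
EOF

# On the cluster:
#   sbatch array_demo.sh         # submits all four array tasks at once
```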

PRE-REQUISITES

This course assumes familiarity with the Linux command line as might be gained from the CSCAR/ARC-TS workshop Introduction to the Linux Command Line. In particular, participants should understand how files and folders work, be able to create text files using the nano editor, be able to create and remove files and folders, and understand what input and output redirection are and how to use them.

INSTRUCTORS

Dr. Charles J Antonelli
Research Computing Services
LSA Technology Services

Charles is a High Performance Computing Consultant in the Research Computing Services group of LSA TS at the University of Michigan, where he is responsible for high performance computing support and education, and was an Advocate to the Departments of History and Communications. Prior to this, he built a parallel data ingestion component of a novel earth science data assimilation system and a secure packet vault, and worked on the No. 5 ESS Switch at Bell Labs in the 1980s. He has taught courses in operating systems, distributed file systems, C++ programming, security, and database application design.

John Thiels
Research Computing Services
LSA Technology Services

MATERIALS

COURSE PREPARATION

In order to participate successfully in the workshop exercises, you must have a Great Lakes user account, a Great Lakes job account (one is created for each workshop), and be enrolled in Duo. The user account allows you to log in to the cluster, create, compile, and test applications, and prepare jobs for submission. The job account allows you to submit those jobs, executing the applications in parallel on the cluster and charging their resource use against the account. Duo is required to help authenticate you to the cluster.

GREAT LAKES USER ACCOUNT

If you already have a Flux user account, you don’t need to do anything to obtain a Great Lakes user account. Otherwise, go to the Flux user account application page at https://arc-ts.umich.edu/fluxform/.

Please note that obtaining a user account requires human processing, so be sure to do this at least two business days before class begins.

GREAT LAKES JOB ACCOUNT

We create a job account for the workshop so you can run jobs on the cluster during the workshop and for one day after for those who would like additional practice. The workshop job account is quite limited and is intended only to run examples to help you cement the details of job submission and management. If you already have an existing Great Lakes job account, you can use that, though if there are any issues with that job account, we will ask you to use the workshop job account.

DUO AUTHENTICATION

Duo two-factor authentication is required to log in to the cluster. When logging in, you will need to type your UMICH (AKA Level 1) password as well as authenticate through Duo in order to access Great Lakes.

If you need to enroll in Duo, follow the instructions at Enroll a Smartphone or Tablet in Duo.

Please enroll in Duo before you come to class.

LAPTOP PREPARATION

You do not need to bring your own laptop to class. The classroom contains Windows and Mac computers, which require your uniqname and UMICH (AKA Level 1) password to log in, and which have all the necessary software pre-loaded.

If you want to use a laptop for the course, you are welcome to do so; please see our web page on Preparing your laptop to use Flux. However, if there are problems connecting your laptop, you will be asked to switch to a provided computer for the class. We cannot stop to debug connection issues with personal or departmental laptops during the class.

If you are unable to attend the presentation in person, we will offer a link into the live course via BlueJeans. Please register as if attending in person. This will put you on the wait list, but we will get your account set up for remote attendance.