class: center, middle

# An Introduction to High Performance Computing

???

Notes for the _first_ slide!

---

# What is High Performance Computing?

--

* Using more than your laptop or desktop (or high-powered workstation)

--

* Designed with a variety of hardware architectures that fit different problems

--

* Allows you to run analyses that are not possible on a typical computer

--

* High memory, high core counts, big data

---

# Before we launch into ISU-HPC...

--

* Are you tired of typing:

```
$ ssh your_netid@nova.its.iastate.edu
```

--

* `cd` into your home directory and try: `ls -a`

--

* You should see a hidden folder named `.ssh`

--

* `cd` into `.ssh` and create a file named "config":

```
$ touch config
```

---

# Before we launch into ISU-HPC...

--

* Now open the file `config`, add the following contents, and save the file:

```
Host hpc-class
    HostName hpc-class.its.iastate.edu
    User your_netid
```

--

* Now try it out!

```
$ ssh hpc-class
```

--

* And once you're in hpc-class, a few new tricks:

```
$ hostname
$ whoami
```
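--

* The same pattern works for nova - one `Host` block per machine (`your_netid` is a placeholder for your ISU NetID):

```
Host nova
    HostName nova.its.iastate.edu
    User your_netid
```

* Then `ssh nova` works just like `ssh hpc-class`.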
---

# Performance Monitoring

* htop (or top)
* iostat
* iftop

There are other tools, but these will get you the basic information you need, and they are installed on most systems by default.
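--

Typical invocations (`iostat` comes with the sysstat package; `iftop` usually needs root, and the interface name is a placeholder):

```
$ htop            # interactive per-process CPU and memory view
$ iostat -x 5     # extended disk I/O statistics every 5 seconds
$ iftop -i eth0   # per-connection network traffic on interface eth0
```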
---

# Terminology

* HPC terminology can be confusing; sometimes the same words have different meanings in different contexts (e.g. threads)

## Terms

* Nodes: compute node, head node

--

* Processors (a CPU chip)

--

* Cores (physical CPUs embedded on a single chip)

--

* Threads: hyper-threading (a software 'trick' to let two processes share a core)

--

* Scheduler: allocates access to resources to users (ISU uses Slurm)
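--

* On any Linux node, `lscpu` shows how these terms map to hardware (numbers below are illustrative: 2 processors x 10 cores x 2 threads = 40 logical CPUs):

```
$ lscpu | grep -E '^CPU\(s\)|Thread|Core|Socket'
CPU(s):                40
Thread(s) per core:    2
Core(s) per socket:    10
Socket(s):             2
```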
---

# Cluster Diagram

---

# Compute Resources at ISU

## Clusters

* [hpc-class on nova](http://www.hpc.iastate.edu/guides/nova/hpc-class)
  * For classes, not research

---

# Compute Resources at ISU

* [nova](http://www.hpc.iastate.edu/guides/nova)
  * Primarily for sponsored research
  * Hundreds of nodes

--

Also check out these [UNIX tutorials](https://www.hpc.iastate.edu/guides/unix-introduction) through ISU-HPC!
---

# Compute Resources at ISU

## [Custom hardware through Pronto at Research IT](https://researchit.las.iastate.edu/guides/pronto/hardware/)

* BioCrunch (for multithreaded, shared-memory programs)
  * 2.4GHz, 80 threads, 768GB of RAM

--

* BigRAM (for large-memory needs like de-novo assembly)
  * 2.6GHz, 48 threads, 1.5TB of RAM

--

* Speedy (for single-threaded programs, like R)
  * 3.4GHz, 24 threads, 366GB of RAM
  * For jobs that cannot be parallelized and require the fastest CPU

--

* GPU (for jobs that need a GPU)

--

* Legion (for massively parallel applications)
  * 4 nodes
  * 1.3GHz, 272 threads, 386GB of RAM (each)

---

# Compute Resources

## [ACCESS](https://access-ci.org/)

* ISU researchers may have access to the national supercomputer centers (e.g. TACC, PSC) via ACCESS
* For problems that need to scale beyond our on-campus resources
* Campus contacts: Jim Coyle or Andrew Severin

## Cloud (AWS, Azure, etc.)

* Tempting introductory rates
* Be cautious about putting in a credit card (charges can accumulate quickly)
* Consider data transfer times and speeds
* Consult with IT before purchasing - they have specially negotiated rates

---

# Software

## Modules

* Modules let multiple people use the same software with consistent, reproducible results

--

* Modules keep your environment clean and free of conflicts (e.g. python2/3 or Java7/8)

--

* Think of modules as a software library: you need to check out the software before you can use it

--

* Modules can be searched:

--

```
$ module avail
-------------------- /opt/rit/modules ---------------------------------
abyss/1.5.2        freesurfer/5.3.0     lib/htslib/1.2.1
abyss/1.9.0        gapcloser/1.12-r6    lib/htslib/1.3
afni/17.0.10       gapfiller/1-10       lib/htslib/1.3.2
albert/20170105    gatk/3.4-46          lib/ICE/1.0.9
...
```
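--

* You can also pass a name to narrow the listing (output illustrative):

```
$ module avail gatk
-------------------- /opt/rit/modules ---------------------------------
gatk/3.4-46
```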
---

# Software

## Modules

* To use a module:

--

```
$ module load <package>
```

--

* To unload a module:

--

```
$ module unload <package>
```

--

* The default behavior is to load the latest version if one is not specified

--

* Use `module purge` to clear your environment before loading something different
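--

* For example, using a version from the listing above:

```
$ module load gatk/3.4-46   # load a specific version
$ module list               # show currently loaded modules
$ module purge              # unload everything and start clean
```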
---

# Storage

--

## Code

* Git (github, bitbucket, gitlab, etc.)

--

## Datasets

--

* Storage on the clusters and servers should be treated as temporary

--

* Become familiar with the disk architecture on each machine

--

* Keep backups

---

# Job Scheduler

--

* The scheduler assigns jobs from the queue to the compute nodes

--

* ISU uses [SLURM](https://researchit.las.iastate.edu/slurm-basics)

--

* Think of the scheduler like a hotel reservation: you're charged for the room whether you use it or not, and if you ask for an especially long or large reservation, the room may not be available when you want it.

--

* Basic information required: how many nodes, and for how long

--

* The [script writer](https://www.hpc.iastate.edu/guides/condo-2017/slurm-job-script-generator-for-condo) can get you started

---

# Slurm Script Cheatsheet

--
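The most common `#SBATCH` directives (all standard Slurm options; values are placeholders):

```
#SBATCH --time=HH:MM:SS       # walltime limit
#SBATCH --nodes=N             # number of nodes
#SBATCH --ntasks-per-node=N   # tasks (cores) per node
#SBATCH --mem=4G              # memory per node
#SBATCH --job-name=myjob      # name shown in the queue
#SBATCH --output=out_%J.log   # stdout file (%J = job id)
#SBATCH --partition=<name>    # which partition/queue to use
```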
---

## Sample SLURM script (mysbatch.sh)

--

```
#!/bin/bash
#SBATCH --time=1:00:00          # walltime
#SBATCH --nodes=2               # number of nodes in this job
#SBATCH --ntasks-per-node=16    # processor cores per node in this job
#SBATCH --output=myout_%J.log
#SBATCH --error=myerr_%J.err

module load R
Rscript MyThing.R
```

--

Then submit:

```
$ sbatch mysbatch.sh
```

---

# Slurm Job Management Cheatsheet

--
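Everyday commands for managing submitted jobs (`<jobid>` is a placeholder):

```
$ squeue -u $USER             # list your queued and running jobs
$ scontrol show job <jobid>   # full details for one job
$ scancel <jobid>             # cancel a job
$ sacct -j <jobid>            # accounting info once the job finishes
```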
---

# Common stumbling blocks

--

* Over- or under-using resources

--

* Not taking advantage of the right machine for the problem

--

* Moving data through slow links

--

* Trying to scale up programs that weren't designed for large datasets

---

# Support

* [Research IT](http://rit.las.iastate.edu)
  * Builds software, runs custom hardware, consults on performance improvements, etc.
  * A collaboration between LAS, CALS, IT, etc.: http://rit.las.iastate.edu/people
  * researchit@iastate.edu
  * IRC (chat): #bitcom on freenode

* [ISU IT HPC Team](http://hpc.iastate.edu)
  * Cluster support team
  * hpc-help@iastate.edu