COMP/CS 605 provides experience in designing and running parallel programs in a modern academic cluster setting. One goal is to work through as many parallel programming exercises as time allows, tackling computationally intensive problems on both the CPU and GPU sides and benchmarking the results. This is parallel computing by doing, primarily in a Linux cluster environment, using the C and Fortran programming languages. A broader goal is to provide students with tools and expertise that will further the computational aspects of their research.
Translating a mathematical description of a problem into a computer program is a prerequisite skill for the course, as are: fluency with the objects, language, and methods of linear algebra and undergraduate calculus; program development in the Unix command-line environment (using editors such as vi, and makefiles); and writing C programs. (Note: basic programming is not taught in this class; see COMP 526, which also covers the Unix/Linux command-line environment.) It is assumed that the student is an accomplished C programmer who can expertly navigate the Unix command line.
COMP/CS 605 is hands-on, with weekly and semi-weekly programming assignments.
The course will consist of the following modules, based on Pacheco's 2011 book (An Introduction to Parallel Programming) and on programming GPU devices using CUDA:
- Introduction to Parallel Computing & Scientific Computing Basics: Unix, performance, benchmarking, analysis, resource management
- Distributed Computing with the Message Passing Interface (MPI)
- Shared-Memory Programming with Pthreads and OpenMP
- CUDA Programming
Each of these modules will have 1-2 homework assignments and an in-class exam. See Homework and Course Policies for more details.