COMP/CS 605: Introduction to Parallel Computing (Spring, 2017)

Class Location: NE 173, T/Th 11:00 - 12:15 pm.

 



Course Description:

COMP/CS 605 provides experience in designing and running parallel programs in a modern academic cluster setting. One goal is to work through as many parallel programming exercises as time allows, tackling computationally intensive problems on both the CPU and the GPU and benchmarking the results. This is parallel computing by doing, primarily in a Linux cluster environment, using the C and Fortran programming languages. A broad goal is to provide students with tools and expertise that will help further the computational aspect of their research efforts.

Translating a mathematical description of a problem into a computer program is a prerequisite skill for the course, as are: fluency with the objects, language, and methods of linear algebra and undergraduate calculus; program development in the Unix command-line environment (using editors such as vi, and makefiles); and writing C-language programs. (Note: basic programming is not taught in this class; see COMP 526, which also covers the Unix/Linux command-line environment.) It is assumed that the student is an accomplished C programmer who can expertly navigate the Unix command-line interface.

COMP/CS 605 is hands-on, with weekly and semi-weekly programming assignments.

The course will consist of the following modules, based on Pacheco's 2011 book and on programming GPU devices using CUDA:

  1. Introduction to Parallel Computing &
    SciComp Basics: Unix, performance, benchmarking, analysis, resource management
  2. Distributed Computing with Message Passing Interface
  3. Shared-Memory Programming with Pthreads and OpenMP
  4. CUDA Programming
Each of these modules will have 1-2 homework assignments and an in-class exam. See Homework and Course Policies for more details.
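
To give a sense of the style of program written in these modules, here is a minimal MPI "hello world" in C. This is an illustrative sketch only, not a course assignment; it assumes the usual mpicc compiler wrapper and mpirun launcher are available on the cluster (module names may differ on your system):

    /* hello_mpi.c: minimal MPI example (illustrative sketch).
       Compile:  mpicc hello_mpi.c -o hello_mpi
       Run:      mpirun -np 4 ./hello_mpi                          */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime              */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id: 0 .. size-1     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes launched */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime          */
        return 0;
    }

Each of the size processes launched by mpirun runs this same program and prints its own rank; later assignments build on the same structure with point-to-point and collective communication.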

Course Prerequisites:

The ability to program well in C is a requirement for the class; in particular, the NVIDIA CUDA compiler for GPU programming is a C compiler (with extensions), and MPI programs may be written in C or Fortran 90. Familiarity with navigating, working with files, and compiling programs in a Unix or Linux environment is also assumed. Mathematical knowledge at the level of performing matrix operations in linear algebra and computing derivatives and (multiple) integrals in calculus is required. The course will use the C (required) and Fortran 90 (optional) programming languages, along with MPI and CUDA extensions, to emphasize key parallel programming concepts.
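
As an illustration of what "C with extensions" looks like, the sketch below uses two of those extensions: the __global__ qualifier marks a function (kernel) that runs on the GPU, and the <<<blocks, threads>>> syntax launches it. The file name, kernel name, and array size here are made up for illustration and are not from a course assignment:

    /* add_one.cu: minimal CUDA C sketch (illustrative only).
       Compile with the NVIDIA compiler:  nvcc add_one.cu -o add_one */
    #include <stdio.h>

    __global__ void add_one(int *data, int n) {          /* kernel: runs on the GPU */
        int i = blockIdx.x * blockDim.x + threadIdx.x;    /* global thread index     */
        if (i < n) data[i] += 1;
    }

    int main(void) {
        const int n = 256;
        int h[256], *d;

        for (int i = 0; i < n; i++) h[i] = i;

        cudaMalloc((void **)&d, n * sizeof(int));          /* allocate device memory */
        cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

        add_one<<<(n + 127) / 128, 128>>>(d, n);           /* launch 2 blocks of 128 threads */

        cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
        cudaFree(d);

        printf("h[0] = %d, h[%d] = %d\n", h[0], n - 1, h[n - 1]);
        return 0;
    }

Everything outside the kernel and the launch syntax is ordinary C, which is why solid C skills are the main prerequisite for the GPU portion of the course.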

Also: knowledge of the FORTRAN or C programming languages. Computer Science 501, 520, 0525; Computational Science 526; or equivalent Unix OS experience is helpful.

Recommended Textbooks:

Peter S. Pacheco, An Introduction to Parallel Programming, Morgan Kaufmann, 2011 (the Pacheco 2011 text referenced above).


© 2017, Mary Thomas - All rights reserved.
The OpenContent license defines the copyright on this document.
