High Performance Parallel Computing
NFYK18001U - SCIENCE
Passed: 97%, Average grade: 9.31, Median grade: 10
Description
Computational methods are growing increasingly important in many areas of science, and the solution to many problems depends on computers that are vastly faster and hold more memory than a single high-end server can offer. Top supercomputers consist of more than a hundred million processor cores working in parallel, and soon a top machine will host more than a billion processors. Programming such highly parallel computers is difficult, and ensuring both program correctness and high performance is non-trivial. In this class students will learn how computers are built, from the individual CPU up to millions of processor cores. Students will learn to program high-performance applications using accelerators (GPUs), shared memory, and explicit message-passing paradigms.

The theory is put into practice through hands-on exercises, and students will learn to map algorithms to parallel architectures and to decompose problems for parallel execution. We will use the ERDA MODI cluster to execute the programs on a real high-performance computing infrastructure and evaluate the performance, scalability, and correctness of the programs. The hands-on exercises use examples from astrophysics, biophysics, geophysics, high-energy physics, and solid-state physics. The numerical methods for each week are chosen to be well suited to each parallel architecture. During the exercises we will introduce a new tool each week, such as debuggers, profilers, and parallel correctness checkers, to aid in the development of high-performance programs.
We will use C++ as the course language, along with small Python programs for visualizing the data. The first week is dedicated to introducing C++ and general programming.
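As a minimal illustration of the shared-memory style mentioned above, the C++ sketch below sums a vector in parallel, assuming OpenMP as the threading model (the course text mentions shared memory but does not name a specific library):

#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> x(n, 1.0);

    double sum = 0.0;
    // Each thread accumulates a private partial sum; the reduction combines them.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        sum += x[i];
    }

    std::printf("sum = %.1f\n", sum);
    return 0;
}

Compiled with OpenMP enabled (e.g. g++ -O2 -fopenmp), the loop iterations are divided among the available threads; without the flag the pragma is ignored and the same code runs correctly on a single thread.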
Knowledge:
The students will understand the challenges in parallelizing applications and the limitations of the available hardware. In addition, the students should have the ability to reason about the potential of different solutions to a given high-performance computing problem.
Skills:
On completion of the course, the student should be able to:
- Design and implement parallel applications
- Choose a parallel computer architecture for a specific purpose
- Program efficiently for shared memory architectures
- Program distributed memory architectures with the Message Passing Interface (MPI); see the sketch after this list
- Transform algorithms to enable vectorization of operations well suited to accelerators
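A minimal sketch of the distributed-memory style listed above: every MPI rank contributes one value and MPI_Reduce collects the sum on rank 0. This is only an illustrative example, not an exercise from the course.

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank contributes one value; MPI_Reduce sums them on rank 0.
    double local = 1.0;
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        std::printf("ranks: %d  total: %.1f\n", size, total);
    }

    MPI_Finalize();
    return 0;
}

Built with an MPI compiler wrapper (e.g. mpic++ reduce.cpp -o reduce) and launched with mpirun -n 4 ./reduce, each of the four processes runs the same program with its own rank, and only rank 0 prints the combined result.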
Competences:
The overall purpose of this course is to enable the student to write high-performance parallel applications on a range of parallel computer architectures. In addition, the successful candidate will become familiar with a number of typical parallel computer architectures and a set of high-performance scientific algorithms.
Recommended qualifications
It is an advantage to have experience writing programs, especially applications in scientific modelling, simulation, or data processing. It is useful if the student has a general idea of the internal construction of a computer. Academic qualifications equivalent to a BSc degree are recommended.
Coordinators
Troels Haugbølle
haugboel@nbi.ku.dk
Markus Jochum
mjochum@nbi.ku.dk
Exam
Continuous Assessment
Course Info
Department(s)
- Niels Bohr Institute
Workload
Lectures: 28h
Preparation: 157h
Practical exercises: 21h
Total: 206h