Advanced Computation
Year: 1
Academic year: 2018-2019
Course code: 03005769
Subject area: Physics
Languages of instruction: Portuguese, English
Mode of delivery: Face-to-face
Duration: Semestral
ECTS credits: 6.0
Type: Elective
Level: 3rd Cycle Studies
Recommended Prerequisites
Programming experience with a high-level language (Fortran or C).
Teaching Methods
Classes are essentially hands-on practice sessions in parallel computing.
Teaching combines slide presentations of the theoretical material, programming examples, and programming exercises; internet access is provided for obtaining relevant material.
The programming exercises are carried out in a command-line terminal on Windows/Linux/macOS, using the GNU compilers and the MPICH implementation of MPI. Access to a remote computer cluster is provided for the graded problem assignments.
Learning Outcomes
Recognize the importance and the application domains of advanced computing.
Know the main hardware and software components of a supercomputer.
Acquire knowledge and practice of parallel computing, including the use of directives/libraries for parallel computing and some specific algorithms for this kind of computing.
Gain experience in using advanced computing resources.
Competences:
Analysis and synthesis abilities;
Problem solving;
Use of the internet as a means of communication and a source of information;
Decision-making capability;
Critical reasoning;
Capacity for autonomous learning;
Adaptability to new situations;
Research abilities.
Work Placement(s)
No
Syllabus
Introduction to advanced computing systems: HPC vs. HTC.
Hardware architectures: clusters, MPP, hybrid architectures.
System software used in HPC: filesystems, libraries, resource management and job allocation.
Trends in supercomputing.
Parallel computing and its importance. Main application domains. Paradigms of parallel computing: shared memory and distributed memory. Measuring the efficiency of parallel algorithms: speedup and Amdahl's law.
OpenMP programming: the fork-join model. Parallel regions. Parallel loops, collective operations and barriers. Private and shared variables. Data-race problems.
MPI. Parallelization techniques: data decomposition and domain decomposition. Master-slave model for data distribution and collection. MPI communication types. Collective operations for data movement and computation. Communicators and communication topologies. Creation of derived data types.
Applications to linear algebra problems and to the numerical solution of the Poisson equation.
Head Lecturer(s)
Helmut Wolters
Assessment Methods
Assessment
Project: 50.0%
Problem assignments: 50.0%
Bibliography
Using MPI, 2nd Edition
William Gropp, Ewing Lusk and Anthony Skjellum, MIT Press
Using MPI-2
William Gropp, Ewing Lusk and Rajeev Thakur, MIT Press
Using OpenMP
Barbara Chapman, Gabriele Jost and Ruud van der Pas, MIT Press
Parallel Programming with MPI
Peter Pacheco, Morgan Kaufmann Publishers, 1997
Numerical Linear Algebra on High-Performance Computers
Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen and Henk A. van der Vorst
The Sourcebook of Parallel Computing
Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon and Andy White (Editors)
http://www.openmp.org
https://computing.llnl.gov/tutorials/mpi