Parallel Computing

Year
1
Academic year
2023-2024
Code
02003476
Subject Area
Physics
Language of Instruction
Portuguese
Other Languages of Instruction
English
Mode of Delivery
Face-to-face
Duration
One semester
ECTS Credits
6.0
Type
Elective
Level
2nd Cycle Studies - Mestrado

Recommended Prerequisites

Programming experience with a high-level language (Fortran or C).

Teaching Methods

Classes are essentially hands-on practice sessions in parallel computing.
Teaching involves slide presentations of theoretical material, programming examples, and programming exercises. Internet access is used to obtain relevant material.

The programming exercises are carried out in a command-line terminal under Windows/Linux/macOS, using the GNU compilers and the MPICH implementation of MPI. Access to a remote computer cluster is provided for the graded problem assignments.
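
For illustration, a typical edit-compile-run cycle with the GNU compilers and MPICH could look as follows (the file names here are placeholders, not course material):

    gcc -fopenmp exercise_omp.c -o exercise_omp    # OpenMP program, shared memory
    ./exercise_omp
    mpicc exercise_mpi.c -o exercise_mpi           # MPI program, via the MPICH compiler wrapper
    mpirun -np 4 ./exercise_mpi                    # launch with 4 processes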

Learning Outcomes

Objectives:
Know the architecture of parallel machines;
Know how to distribute, for selected problems, a computational task among several processes that are as independent as possible;
Understand when to use the different parallel programming paradigms.

Competences:
Develop analysis and synthesis abilities;
Problem solving;
Use of the internet as a means of communication and a source of information;
Decision-making capability;
Critical reasoning;
Capacity for autonomous learning;
Adaptability to new situations;
Research ability.

Work Placement(s)

No

Syllabus

Basic notions: parallel computing and its importance. Main application domains. Paradigms of parallel computing: shared and distributed memory. What is a supercomputer: main types of hardware architectures, components and middleware. Trends in supercomputing.

Parallelization in software: OpenMP and MPI. Measuring the efficiency of parallel algorithms: speedup and Amdahl's law.
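
For reference, Amdahl's law gives the maximum speedup S(N) on N processes when a fraction p of the execution time can be parallelized (a standard result, not specific to this course):

    S(N) = \frac{1}{(1 - p) + p/N}

As N grows, the speedup saturates at 1/(1 - p): with p = 0.9, for example, no more than a tenfold speedup is possible.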

OpenMP programming: the fork-join model. Parallel regions. Parallel loops, collective operations and barriers. Private and shared variables. Data race problems.
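
As a minimal sketch of these ideas (illustrative code, not taken from the course), the following C program parallelizes a loop with OpenMP; the reduction clause gives each thread a private partial sum, avoiding the data race that a shared accumulator would cause:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* Fork: the loop iterations are divided among a team of threads.
           reduction(+:sum) keeps a private copy of sum per thread and
           combines the copies at the end of the parallel region. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= n; i++) {
            sum += 1.0 / ((double)i * i);   /* partial sums converging to pi^2/6 */
        }

        /* Join: a single thread continues after the parallel region. */
        printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }

Declaring sum shared without the reduction clause reproduces exactly the data race problem listed above.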

MPI. Parallelization techniques: data decomposition and domain decomposition. Master-slave model for data distribution and collection. MPI communication types. Collective operations for data and computation. Communicators and communication topologies. Creation of derived data types.
Applications to linear algebra problems and to the numerical solution of the Poisson equation.
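
As a sketch of data decomposition with collective operations (illustrative, not the course's reference code), the following C program scatters a vector from the master process, computes partial dot products locally, and combines them with a reduction:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n_local = 4;                 /* elements per process */
        int n = n_local * size;
        double *x = NULL;

        if (rank == 0) {                       /* master holds the full vector */
            x = malloc(n * sizeof(double));
            for (int i = 0; i < n; i++) x[i] = 1.0;
        }

        /* Data decomposition: each process receives one block of x. */
        double x_local[4];
        MPI_Scatter(x, n_local, MPI_DOUBLE, x_local, n_local, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

        /* Local computation on the block. */
        double partial = 0.0;
        for (int i = 0; i < n_local; i++) partial += x_local[i] * x_local[i];

        /* Collective operation for computation: global sum on the master. */
        double dot;
        MPI_Reduce(&partial, &dot, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("x.x = %f\n", dot);
            free(x);
        }
        MPI_Finalize();
        return 0;
    }

The same scatter/compute/reduce pattern is the basic shape of the master-slave distribution and collection described above.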

Head Lecturer(s)

Helmut Wolters

Assessment Methods

Assessment
Project: 50.0%
Problem solving: 50.0%

Bibliography

B. Chapman, G. Jost, R. van der Pas, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007.

W. Gropp, E. Lusk, A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd edition, MIT Press, 1999.

P. Pacheco, Parallel Programming with MPI, Morgan Kaufmann Publishers, 1997.

http://www.openmp.org

https://computing.llnl.gov/tutorials/mpi