Scientific Computing for Engineers: Spring 2008 – 3 Credits
This class is part of the Interdisciplinary Graduate Minor in Computational Science. See IGMCS for details.
Room C233 (NOTE: This is a room change)
Office hours: Wednesday, or by appointment
TA: Gwang Son, email@example.com
TA’s Office : Claxton 349; Phone: 974-3760.
TA’s Office Hours: Wednesdays, 11:00 – 1:00, or by appointment
There will be four major aspects of the course:
· Part I will start with current trends in high-end computing systems and environments, and continue with a short practical introduction to parallel programming with MPI, OpenMP, and pthreads.
· Part II will illustrate the modeling of problems from physics and engineering in terms of partial differential equations (PDEs), and their numerical discretization using finite difference, finite element, and spectral approximations.
The grade will be based on homework, a midterm project, a final project, and a final project presentation. Topics for the final project are flexible according to the student's major area of research.
Course mailing list: firstname.lastname@example.org
Book for the Class:
The Sourcebook of Parallel Computing, Edited by Jack Dongarra, Ian Foster, Geoffrey Fox, William Gropp, Ken Kennedy, Linda Torczon, Andy White, October 2002, 760 pages, ISBN 1-55860-871-0, Morgan Kaufmann Publishers.
Lecture Notes: (Tentative outline of the class)
Read Chapters 1, 2, and 9
Homework 1 (due January 23, 2008)
Read Chapter 3
Homework 2 (due February 6, 2008)
Notes on booting over the network
Read Chapter 11
Homework 3 (due February 18, 2008)
Homework 4 (due March 5, 2008)
Read Chapter 3
Toward an Optimal Algorithm for Matrix Multiply
Read Chapter 20
Homework 5 (due March 12, 2008)
Read Chapter 20
Homework 6 (due March 26, 2008)
Matlab Script myqr_it.m
Homework 7 (due April 2, 2008)
Read Chapter 15
March 19 – Spring Break
Read Chapter 14, pp. 409–442
12. April 2 – (Dr. Tomov)
Read Chapter 20 and 21
Read Chapter 21
Class Final reports
Order of presentation:
1. Wes Kendall Matching patterns with climate data
2. Rick Weber Optimizing one of the components of MADNESS
3. Gwang Son Google
4. Roland Schulz FFTs 3d w/2d decomposition
5. Dilip Patlolla Strassen Matrix Multiply on the GPU, and an implementation of Strassen tuned for the memory hierarchy
6. Benjamin Lindner Biological Crystallography
7. Junkyu Lee Lanczos method for symmetric eigenvalue problems
8. Yinan Li GridSolve request sequencing
9. Rajib Kumar Nath Loop transformation
10. Supriya Kilambi Conjugate gradient method on GPUs
11. Bruce Johnson Radiosity calculation on GPUs
12. Akila Gothandaraman Implementation of a parallel Quantum Monte Carlo application and a study of its performance
13. Reuben Budiardja Implementation of FFT based Poisson Solver for Self-Gravitating System on 3D Mesh
· Project reports are to be turned in on Tuesday, April 29th.
· Here are some ideas for projects:
Message Passing Systems.
PVM home page.
Other useful reference material
· Here’s a pointer to specs on various processors:
A good introduction to message passing systems.
``Message Passing Interfaces'', special issue of Parallel Computing, vol. 20(4), April 1994.
A paper by members of the PVM team on the differences between PVM and MPI.
Geist, G.A., J.A. Kohl, P.M. Papadopoulos, ``PVM and MPI: A Comparison of Features'', Calculateurs Paralleles, 8(2), pp. 137--150, June 1996.
Papers by members of the MPI team on the differences between PVM and MPI.
``Why are PVM and MPI So Different'', William Gropp and Ewing Lusk (submitted to The Fourth European PVM - MPI Users' Group Meeting)
``PVM and MPI are completely different'', William Gropp and Ewing Lusk, to appear in the journal Future Generation Computer Systems, 1998.
Ian Foster, Designing and Building Parallel Programs, see http://www-unix.mcs.anl.gov/dbpp/
Alice Koniges, ed., Industrial Strength Parallel Computing, ISBN 1-55860-540-1, Morgan Kaufmann Publishers.
Michael Quinn, Parallel Programming, see http://web.engr.oregonstate.edu/~quinn/Comparison.htm
David E. Culler & Jaswinder Pal Singh, Parallel Computer Architecture, see http://www.cs.berkeley.edu/%7Eculler/book.alpha/index.html
George Almasi and Allan Gottlieb, Highly Parallel Computing
Standard Books on Message Passing
``MPI - The Complete Reference, Volume 1, The MPI-1 Core, Second Edition''
``MPI - The Complete Reference, 2nd Edition: Volume 2, The MPI-2 Extensions''
On-line Documentation and Information about Machines
Other Parallel Information Sites
Related On-line Textbooks
· Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
· PVM - A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press
· MPI : A Message-Passing Interface Standard
· LAPACK Users' Guide
· MPI: The Complete Reference, MIT Press
· Using MPI: Portable Parallel Programming with the Message-Passing Interface by W. Gropp, E. Lusk, and A. Skjellum
· Parallel Computing Works, by G. Fox, R. Williams, and P. Messina (Morgan Kaufmann Publishers)
· Designing and Building Parallel Programs. A dead-tree version of this book is available from Addison-Wesley.
· High Performance, a course offered by
For performance analysis:
· Raj Jain, The Art of Computer Systems Performance Analysis. John Wiley, 1991.
Papers on performance analysis tools:
· Ruth A. Aydt, "The Pablo Self-Defining Data Format," November 1997.
· Jeffrey K. Hollingsworth, Barton P. Miller, Marcelo J. R. Gongalves, Oscar Naim, Zhichen Xu and Ling Zheng, "MDL: A Language and Compiler for Dynamic Program Instrumentation", International Conference on Parallel Architectures and Compilation Techniques, San Francisco, CA, November 1997.
· Barton P. Miller, Mark D. Callaghan, Jonathan M. Cargille, Jeffrey K. Hollingsworth, R. Bruce Irvin, Karen L. Karavanic, Krishna Kunchithapadam and Tia Newhall, "The Paradyn Parallel Performance Measurement Tools", IEEE Computer 28(11), November 1995.
· Jerry Yan and Sekhar Sarukkai and Pankaj Mehra, "Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs using the AIMS toolkit", Software Practice and Experience 25(4), April 1995, 429--461
Other Online Software and Documentation
· Matlab documentation is available from several sources, most notably by typing ``help'' into the Matlab command window. A primer (for version 4.0/4.1 of Matlab, not too different from the current version) is available in either postscript or pdf.
· Netlib, a repository of numerical software and related documentation
· Netlib Search Facility, a way to search for the software on Netlib that you need
· GAMS - Guide to Available Math Software, another search facility to find numerical software
· Linear Algebra Software Libraries and Collections
· LAPACK, state-of-the-art software for dense numerical linear algebra on workstations and shared-memory parallel computers. Written in Fortran.
· ScaLAPACK, a partial version of LAPACK for distributed-memory parallel computers.
· SuperLU, a fast implementation of sparse Gaussian elimination, with versions for sequential and parallel computers.
· Sources of test matrices for sparse matrix algorithms
· Templates for the solution of linear systems, a collection of iterative methods, with advice on which ones to use. The web site includes on-line versions of the book (in html and postscript) as well as software.
· Templates for the Solution of Algebraic Eigenvalue Problems is a survey of algorithms and software for solving eigenvalue problems. The web site points to an html version of the book, as well as software.
· MGNet is a repository for information and software for Multigrid and Domain Decomposition methods, which are widely used methods for solving linear systems arising from PDEs.
· Resources for Parallel and High Performance Computing
· Millennium a UC Berkeley campus-wide parallel computing resource
· ACTS (Advanced CompuTational Software) is a set of software tools that make it easier for programmers to write high performance scientific applications for parallel computers.
· NHSE - National High Performance Computing and Communications Software Exchange, pointers to related work across the country.
· Issues related to Computer Arithmetic and Error Analysis
· Efficient software for very high precision floating point arithmetic
· The IEEE floating point standard is currently being updated. The standards committee's web pages describe the issues under consideration.
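The rounding behavior that the computer-arithmetic and error-analysis material deals with can be seen in a few lines. This sketch is my own example, not part of the listed resources; it contrasts naive summation with Kahan's classic compensated summation:

```python
# Illustrative sketch (not from the listed resources): IEEE 754 double
# precision cannot represent 0.1 exactly, and rounding errors accumulate
# when many such values are summed naively. Kahan's compensated summation
# tracks the lost low-order bits to keep the running sum nearly exact.

def kahan_sum(values):
    """Sum values with Kahan compensation to reduce rounding error."""
    total = 0.0
    c = 0.0                     # running compensation for lost low-order bits
    for v in values:
        y = v - c               # corrected next term
        t = total + y           # low-order bits of y may be lost here...
        c = (t - total) - y     # ...and are recovered into c
        total = t
    return total

print(0.1 + 0.2 == 0.3)        # prints False: neither side is exact

# Summing a million copies of 0.1 naively drifts away from the true value
# 100000; the compensated sum stays much closer.
vals = [0.1] * 1_000_000
naive = sum(vals)
compensated = kahan_sum(vals)
```

The gap between `naive` and `compensated` is exactly the kind of effect that backward error analysis of floating point algorithms quantifies.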