Teaching — Panruo Wu

« Back

Fall 2023 COSC 3320 - Algorithms and Data Structures course link
Spring 2021 COSC 3320 - Algorithms and Data Structures course link
Fall 2020 COSC 6374 - Parallel Computations

Single-thread performance improvement has been slowing for some time now due to the rising costs of energy consumption and heat dissipation. Computers of all sizes, from the fastest supercomputers tackling the most challenging problems to personal computers and mobile devices, have turned to parallelism at various levels---instruction-level parallelism, multi-threading, networked clustering, and accelerators such as GPUs---to improve performance. Parallel computing is thus increasingly critical for making efficient use of today's computer systems, which are all parallel computers.

This course will introduce parallel computer architectures, parallel computing principles, programming models, and technologies (SIMD, MPI, OpenMP, pthreads, CUDA, MapReduce/Spark). We will also explore, as examples, important problems that can benefit from parallel computing: sorting, graph algorithms, numerical algorithms such as matrix computations, and machine learning models and algorithms. The main perspective is that of a programmer, so the knowledge and skills acquired in this course will be widely useful. We will focus on principles and practice, and the main mode of studying is by doing.
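As a small taste of the data-parallel style these technologies share, here is a minimal sketch in Python using the standard-library multiprocessing module (the course itself uses MPI, OpenMP, pthreads, and CUDA; the function names and chunking scheme here are illustrative, not course material):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Sum one chunk of the data; each chunk runs in a separate process."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split data into roughly equal chunks, sum them in parallel, then combine."""
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with Pool(workers) as pool:
        # The "reduce" step: combine the per-process partial sums.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert parallel_sum(data) == sum(data)
```

The same split-compute-combine pattern underlies MPI reductions, OpenMP reduction clauses, and MapReduce jobs, just expressed with different machinery.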

Have you ever wondered how modern microprocessors examine hundreds of instructions at a time and try to execute several per cycle, and how programmers can write programs that help processors do so? How GPUs with thousands of cores can perform certain computations blazingly fast? How a cluster of networked computers can collaborate on problems of enormous size? Please join the course and the journey of doing multiple things at the same time!

Fall 2019 COSC 6374 - Parallel Computations

Parallel Architectures, programming, and applications.

Spring 2019 COSC 6364 - Adv. Numerical Analysis

The objective of this course is to lay the numerical algorithmic foundations for scientific computing and for machine/statistical learning. The two main focuses are numerical linear algebra and (mostly convex) optimization.

This course will include the following topics:

  • Matrix factorizations: LU, QR, Cholesky
  • Eigenvalue and singular value decompositions
  • Linear system solvers, least-squares problems, and eigenvalue problems: direct and iterative solvers
  • Advanced numerical linear algebra: parallelization, randomization, ...
  • Convex optimization
  • First-order optimization methods: gradient descent, subgradient method, proximal/accelerated gradient descent, stochastic gradient descent, ...
  • Duality, KKT conditions
  • Second-order optimization methods: Newton's method, barrier method, primal-dual interior point methods, quasi-Newton methods, ...
  • Advanced optimization methods: ADMM, dual methods, coordinate descent, ...
  • Parallelization of numerical linear algebra and optimization
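As a small taste of the first-order methods listed above, here is a sketch of plain gradient descent on a simple quadratic in pure Python (the objective, step size, and iteration count are illustrative choices, not course-specified values):

```python
# Minimize f(x, y) = (x - 3)^2 + 2*(y + 1)^2, whose unique
# minimizer is (3, -1). Since f is a strongly convex quadratic,
# gradient descent with a small fixed step size converges linearly.

def grad(x, y):
    """Gradient of f: (df/dx, df/dy)."""
    return 2 * (x - 3), 4 * (y + 1)

def gradient_descent(x0, y0, step=0.1, iters=200):
    """Repeatedly step in the direction of steepest descent."""
    x, y = x0, y0
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = gradient_descent(0.0, 0.0)
# (x, y) ends up very close to the minimizer (3, -1).
```

Stochastic, proximal, and accelerated variants covered in the course modify this basic iteration (sampled gradients, a proximal step, a momentum term) while keeping the same overall shape.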

The emphasis is on the practical design, analysis, and implementation of numerical algorithms on modern computers, ranging from a simple desktop or embedded system to massively parallel machines consisting of tens of thousands of nodes in a supercomputer or datacenter.

This course will be largely self-contained. Prerequisites include familiarity with college-level multivariate calculus, linear algebra, probability/statistics, and scripting in Matlab or Python (or R, Julia, or whatever programming environment you'd like to use).

Fall 2018 COSC 6374 - Parallel Computations

Single-thread performance has stagnated for some time due to the rising costs of energy consumption and heat dissipation. Computers of all sizes, from the fastest supercomputers tackling the most challenging problems to personal computers and mobile devices, have turned to multi-threading, networked clustering, and accelerators such as GPUs to improve performance. Parallel computing is thus increasingly important for making efficient use of today's computer systems, which are all parallel computers.

This course starts by introducing parallel computer architectures, parallel computing principles, programming models, and technologies (MPI, OpenMP, pthreads, CUDA, MapReduce/Spark). The second part explores using parallel computing for several key computational kernels (linear algebra operations, simulations, machine/statistical learning, graphs, FFT, sorting, etc.), which are the workhorses of computational science and data science. The course will consist of lectures, readings, and programming projects.
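The MapReduce model mentioned above can be sketched in a few lines of Python with the standard-library multiprocessing module (the classic word-count example; the function names are illustrative, and real MapReduce/Spark jobs additionally shuffle data across machines):

```python
from collections import Counter
from multiprocessing import Pool

def map_count(line):
    """Map step: count word occurrences within a single line."""
    return Counter(line.split())

def word_count(lines, workers=2):
    """MapReduce-style word count: map lines in parallel, then reduce."""
    with Pool(workers) as pool:
        partials = pool.map(map_count, lines)
    total = Counter()
    for c in partials:  # reduce step: merge the partial counts
        total += c
    return total

if __name__ == "__main__":
    counts = word_count(["a b a", "b c"])
    # counts == Counter({'a': 2, 'b': 2, 'c': 1})
```

Because the map step is independent per line and the reduce step is associative, the same program scales from a laptop's cores to a cluster by swapping the execution engine.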

Spring 2018 COSC 6374 - Parallel Computations

Single-thread performance has stagnated for some time due to the rising costs of energy consumption and heat dissipation. Computers of all sizes, from the fastest supercomputers tackling the most challenging problems to personal computers and mobile devices, have turned to multi-threading, networked clustering, and accelerators such as GPUs to improve performance. Parallel computing is thus increasingly important for making efficient use of today's computer systems, which are all parallel computers.

This course starts by introducing parallel computer architectures, parallel computing principles, programming models, and technologies (MPI, OpenMP, pthreads, CUDA). The second part explores using parallel computing for several key computational kernels (linear algebra operations, simulations, graphs, FFT, etc.), which are the workhorses of computational science and data science. The course will consist of lectures, readings, and programming projects.