Par Lab Seminar: Towards a Science of Parallel Programming

11/19/2009, 11:00 am – 12:30 pm

Keshav Pingali of the University of Texas at Austin will speak on Thursday, November 19 at 11am in 430 Soda Hall (the Woz).

When parallel programming began in the 1970s and 1980s, it was mostly art: languages such as functional and logic programming languages were designed and appreciated mainly for their elegance and beauty. More recently, parallel programming has become engineering: conventional languages like FORTRAN and C++ have been extended with parallel constructs, and we now spend our time benchmarking and tweaking large programs that no one understands to obtain performance improvements of 5-10%. In spite of all this activity, we have few insights into how to write parallel programs that exploit the performance potential of multicore processors.

To break this impasse, we need a science of parallel programming. In this talk, I will argue that our problems arise from the fact that the foundational abstractions we use to reason about parallelism in programs are broken. I will then propose a new abstraction called "amorphous data-parallelism" that provides a simple and unified picture of parallelism in a host of diverse applications, ranging from mesh generation, refinement, and partitioning to SAT solvers, maxflow algorithms, stencil computations, and event-driven simulation. Finally, I will present a natural classification of these kinds of algorithms that provides insight into the structure of parallelism and locality in these algorithms, and into appropriate language and systems support for exploiting this parallelism.

Keshav Pingali is the W.A. "Tex" Moncrief Chair of Computing in the Computer Sciences department at the University of Texas at Austin. He was on the faculty of the Department of Computer Science at Cornell University from 1986 to 2006, where he held the India Chair of Computer Science. Pingali's research has focused on programming languages and compiler technology for program understanding, restructuring, and optimization. His group is known for its contributions to memory-hierarchy optimization, some of which have been patented. Algorithms and tools developed by his projects are used in many commercial products, such as Intel's IA-64 compiler, SGI's MIPSPro compiler, and HP's PA-RISC compiler. His current research focuses on programming languages and tools for multicore processors. He is the Editor-in-Chief of the ACM Transactions on Programming Languages and Systems (TOPLAS) and the Steering Committee Chair for the ACM Symposium on Principles and Practice of Parallel Programming (PPoPP), and he serves on the NSF CISE Advisory Committee (2009-2011).