Symposium on the occasion of the publication of Parallel Scientific Computation: A Structured Approach using BSP and MPI, by Rob H. Bisseling. Oxford University Press, March 2004.
This symposium presents recent trends in parallel computation and outlines their potential for the field of scientific computation. A common thread in the talks of the symposium is the bulk synchronous parallel (BSP) model, which provides a convenient theoretical framework for the design and analysis of parallel algorithms and which should naturally lead to well-structured parallel programs.
Everyone interested is welcome! Students in particular are encouraged to participate. Participation and refreshments are free.
Fifteen years after the first work on BSP computing, the first book on the subject has just appeared. In this talk I will describe some of the insights that BSP has provided over the years into how to design large-scale parallel systems that can attain peak levels of performance. Extensive research and practical experience have shown that the unique superstep semantics of BSP provides a simple and very powerful model for structuring parallel computations that use Remote Direct Memory Access (RDMA) as the fundamental (one-sided) communication mechanism. As new high-performance communication technologies based on RDMA over IP protocols emerge over the next few years and become widespread in high-performance computing, BSP is ideally positioned to provide the structured parallel programming model that systems built on those technologies will require.
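To make the superstep style concrete, here is a minimal sketch in C using the BSPlib primitives on which the book's programs are built (bsp_put for one-sided communication, bsp_sync for the superstep boundary); the cyclic-shift computation it performs is invented purely for illustration:

    #include <stdio.h>
    #include <bsp.h>

    /* Each processor computes a local value and writes it into its right
       neighbour's memory with a one-sided bsp_put. The bsp_sync() ends the
       superstep; only then is all communication guaranteed to be complete. */
    void spmd(void) {
        bsp_begin(bsp_nprocs());
        int p = bsp_nprocs();                  /* number of processors */
        int s = bsp_pid();                     /* this processor's identity */
        int received = -1;

        bsp_push_reg(&received, sizeof(int));  /* make 'received' writable remotely */
        bsp_sync();

        int value = s * s;                     /* some local computation */
        bsp_put((s + 1) % p, &value, &received, 0, sizeof(int));
        bsp_sync();                            /* superstep boundary */

        printf("Processor %d received %d\n", s, received);
        bsp_pop_reg(&received);
        bsp_end();
    }

    int main(int argc, char **argv) {
        bsp_init(spmd, argc, argv);
        spmd();
        return 0;
    }

The point of the discipline is that no processor reads the data written into its memory until after the bsp_sync that closes the superstep, which is exactly the structure that maps naturally onto RDMA-style one-sided communication.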
In spite of the general availability of parallel computing hardware in research labs, and in spite of a rich literature on parallel algorithms for numerical applications, only a tiny fraction of computational scientists write parallel software. The main reason for this is the technical difficulty of writing, debugging, and maintaining parallel code. Just as high-level languages simplify software development, the high-level parallel computing model BSP simplifies parallelization. Using the computational study of protein dynamics as an example, I will show how the combination of the two makes parallel programming accessible to computational scientists.
The field of scientific computation can gain tremendously from the power of parallel computers such as Beowulf clusters or supercomputers. Some applications are already routinely carried out on parallel computers, but many more could follow. In this talk, I will present a picture album with the main themes from the book "Parallel Scientific Computation": designing parallel algorithms in a structured way by using the BSP model, choosing the best data distribution by a careful complexity analysis, and minimising and balancing communication by a variety of mathematical methods, such as sphere packing, space tiling, graph traversing, and hypergraph partitioning. I will discuss recent improvements of the Mondriaan partitioning package for sparse matrices based on such methods.
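For readers unfamiliar with it, the complexity analysis mentioned here is carried out in the standard BSP cost model (a property of the model itself, not of any particular example in the talk): a superstep in which the maximum local work is $w$ and every processor sends or receives at most $h$ data words costs

    \[ T_{\text{superstep}} = w + h\,g + l, \]

where $g$ is the communication cost per data word and $l$ the cost of a global synchronisation; summing over the supersteps gives an algorithm cost of the form $a + b\,g + c\,l$.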
(This lecture is also scheduled as part of the regular departmental seminar series `stafcolloquium' of the Mathematical Institute. Chair of the session: Johan van de Leur.)
By public transport from Utrecht Central Station: Bus 11 (stop `botanische tuin') comes closest. Bus 12 (stop `Kruytgebouw') is an alternative.