Keynotes

Keynote addresses for EuroMPI 2016

Jonathan Dursi, Ontario Institute for Cancer Research: “How Can MPI Fit Into Today’s Big Computing?”

10:30am, Wed 28th Sept

For years, the academic science and engineering community was almost alone in pursuing very large-scale numerical computing, and MPI was the lingua franca for such work. But starting in the mid-2000s, we were no longer alone. First, internet-scale companies like Google and Yahoo! began performing fairly basic analytics tasks at enormous scale, and since then others have begun tackling increasingly complex and data-heavy machine-learning computations, which involve very familiar scientific computing primitives such as linear algebra, unstructured mesh decomposition, and numerical optimization. These new communities have created programming environments that emphasize what we have learned about computer science and programmability since 1994, with greater levels of abstraction and encapsulation, separating high-level computation from the low-level implementation details.

At about the same time, new academic research communities began using computing at scale to attack their problems - but in many cases, an ideal distributed-memory application for them begins to look more like the new concurrent distributed databases than a large CFD simulation, with data structures like dynamic hash tables and Bloom trees playing more important roles than rectangular arrays or unstructured meshes.  These new academic communities are among the first to adopt emerging big-data technologies over traditional HPC options; but as big-data technologies improve their tightly-coupled number-crunching capabilities, they are unlikely to be the last.

In this talk, I sketch out the landscape of distributed technical computing frameworks and environments, and look at where MPI and the MPI community fit into this new ecosystem.

Jonathan Dursi has spent over twenty-five years using large-scale scientific computing to advance science and R&D across a range of disciplines. He received his Ph.D. in astrophysics from the University of Chicago in 2004, performing very large-scale supernova simulations with the DOE ASCI ASAP program at the Flash Center, and since then has been a senior research associate at the Canadian Institute for Theoretical Astrophysics; an analyst for SciNet, Canada's largest HPC centre; and, on secondment, the first (interim) CTO for Compute Canada. In 2015 he moved into cancer bioinformatics at the Ontario Institute for Cancer Research (OICR), where he is a Scientific Associate and Software Engineer. At OICR, he currently works with very large traditional genomics data sets for the international TCGA+ICGC Pancancer consortium, and still manages to sneak in some number-crunching by way of a new genome sequencing device, the Oxford Nanopore Technologies MinION, which generates floating-point signal-strength data as a strand of DNA passes through a nanopore.

_______________________________________________

Bill Gropp, NCSA, University of Illinois at Urbana-Champaign: “MPI: The Once and Future King”

10:00am, Mon 26th Sept

The Message Passing Interface (MPI) has been the dominant programming system for expressing highly parallel technical applications for over 20 years. This success was due in part to the careful design of the standard by the MPI Forum and to the good match between the message-passing programming model and the distributed-memory parallel computers built from commodity processors. However, the end of Dennard scaling and the looming end of Moore's "law" are causing major changes in computer architecture as well as creating a new community of parallel computer programmers. Will MPI continue to be relevant, or will some new programming system replace it? This talk will review the reasons for MPI's success, including the new features added in MPI-2 and MPI-3 and those planned for MPI-4, and argue why MPI will continue to be the parallel programming system for highly scalable applications.

William Gropp is the Thomas M. Siebel Chair in the Department of Computer Science, Director of the Parallel Computing Institute, and chief scientist of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. After receiving his Ph.D. in Computer Science from Stanford University in 1982, he held the positions of assistant (1982-1988) and associate (1988-1990) professor in the Computer Science Department of Yale University. In 1990, he joined the Numerical Analysis group at Argonne, where he held the positions of Senior Scientist (1998-2007) and Associate Division Director (2000-2006). He joined Illinois in 2007. His research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. He is a co-author of "Using MPI: Portable Parallel Programming with the Message-Passing Interface" and a chapter author in the MPI Forum. His current projects include Blue Waters, an extreme-scale computing system, and the development of new programming systems and numerical algorithms for scalable scientific computing. He is a Fellow of ACM, IEEE, and SIAM and a member of the National Academy of Engineering.

_______________________________________________

Kathryn Mohror, Lawrence Livermore National Laboratory: “Getting Insider Information via the New MPI Tools Information Interface”

2:00pm, Mon 26th Sept

MPI 3.0 introduced a new interface for MPI support tools called the MPI Tools Information Interface (MPI_T). With this interface, tools can for the first time access MPI-internal performance and configuration information. In combination with the complementary and widely used profiling interface, PMPI, the new tools interface gives tools access to a wide range of information in an implementation-independent way. In this talk, I will give an overview of the new interface and its current status in terms of support in MPI implementations and in new tools that use it. I will finish by discussing how the MPI Tools Working Group is working to extend and improve the interface to better serve tools and users.
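To give a flavor of the interface, the short C sketch below (an illustration for this page, not taken from the talk) uses the MPI_T routines from MPI 3.0 to enumerate the control variables (cvars) an implementation exposes; which variables are reported, and how many, is entirely implementation-specific.

    /* Minimal sketch: list the control variables (cvars) an MPI
       implementation exposes through the MPI 3.0 Tools Information
       Interface.  Build with an MPI 3.0 compiler wrapper, e.g. mpicc. */
    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided, num_cvars;

        /* MPI_T may be initialized independently of MPI_Init. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&num_cvars);
        printf("This implementation exposes %d control variables:\n",
               num_cvars);

        for (int i = 0; i < num_cvars; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            /* name_len/desc_len are in-out: buffer size on input,
               actual string length on output. */
            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("  [%d] %s: %s\n", i, name, desc);
        }

        MPI_T_finalize();
        return 0;
    }

Performance variables (pvars) follow an analogous pattern through the MPI_T_pvar_* routines.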

Kathryn Mohror is a computer scientist on the Scalability Team at the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory (LLNL). Kathryn's research on high-end computing systems is currently focused on scalable fault-tolerant computing and I/O for extreme-scale systems. Her other research interests include scalable performance analysis and tuning, and parallel programming paradigms. Kathryn has been working at LLNL since 2010.

Kathryn’s current research focuses primarily on the Scalable Checkpoint/Restart Library (SCR), a multilevel checkpointing library that has been shown to significantly reduce checkpointing overhead. She also leads the Tools Working Group for the MPI Forum. Kathryn received her Ph.D. in Computer Science in 2010, an M.S. in Computer Science in 2004, and a B.S. in Chemistry in 1999 from Portland State University (PSU) in Portland, OR.

_______________________________________________

David Lecomber, Allinea Software: “HPC’s a-changing, so what happens to everything we know?”

9:30am, Wed 28th Sept

The world of HPC is changing faster than it has for a long time. HPC has a wider reach than ever: systems address more varied workloads, and the software that runs on them is increasingly diverse. Access to machines is more widespread - high-performance workstations, national, institutional, and departmental systems, and cloud all contribute. Within a few short years the boundaries will shift - between storage and memory, between cloud and supercomputer, between simulation and data analysis.

With so much in flux, what does this mean, and what can HPC insiders do to thrive in the new world?

Dr David Lecomber is the CEO and a founder of Allinea Software. Early exposure to parallel computing in the form of OCCAM and BSP sparked an interest that led to a DPhil in parallel computing - and he has now been involved in parallel and high performance computing for over two decades. In 2002 he co-founded Allinea Software to create tools for the forthcoming parallel era - and those tools for performance optimization and debugging are widely used today by developers and users of HPC software around the world.

_______________________________________________

Nicole Hemsoth, The Next Platform: “The State of MPI in 2016: Challenges, Opportunities and Questions”

9:30am, Tue 27th Sept

This talk will present a synthesis of research from within the HPC and MPI community about what challenges remain for MPI and what future opportunities and directions lie ahead. Background will be presented that summarizes and analyzes issues raised by MPI community members to produce a big-picture view of where MPI stands from both an end-user and a developer perspective. Among the topics to be discussed are how MPI will respond to trends on the processor front (in terms of scheduling, threading, etc.), parallel I/O challenges, responding to dynamic resource capabilities, and low- versus high-level programming paradigms and MPI's role within them. The goal is to provide an objective, community-based response to foster discussion and to create a document for the MPI community as it moves toward a future in which massive core counts and increased system complexity create further challenges.

Nicole Hemsoth is co-founder and co-editor of The Next Platform. Nicole brings insight from the world of high-performance computing hardware and software as well as data-intensive systems and frameworks. Hemsoth is the former Editor in Chief of the long-standing supercomputing magazine HPCwire. She was the founding editor and conceptual creator of the data-intensive computing magazine Datanami, as well as the conceptual creator and founding Senior Editor of EnterpriseTech, which focuses on large-scale infrastructure.

_______________________________________________

Toni Collis, EPCC, The University of Edinburgh & Women in High Performance Computing: “The Elephant in the room: the under-representation of women in the MPI community”

10:30am, Wed 28th Sept

The Message Passing Interface is the de facto standard method for communication between processes on modern parallel architectures, having dominated the field for much of the 22 years since the first standard was released in 1994. As this conference discusses, MPI is now reaching a critical period in which it must change and adapt to accommodate both the challenges of exascale and the diversity of new modern HPC architectures. To meet this challenge, diversity of thought and ideas is paramount, and this is best achieved by engaging a diverse pool of contributors and by ensuring that there are no barriers to participation in this debate. Anecdotal observation suggests that the MPI community is not as diverse as it could be, particularly with regard to the representation of women. I will discuss our first attempt at quantifying the gender balance of those interacting with MPI, provide a comparison with similar communities, and describe the challenges we face if we wish to diversify this community.

Toni Collis is the Director and founder of the Women in HPC (WHPC) network and an Applications Consultant in HPC Research and Industry at the Edinburgh Parallel Computing Centre (EPCC), UK. Within EPCC, Toni provides technical expertise on a range of research projects that use HPC in academic software, including technical assistance for users of ARCHER, the UK national HPC facility. Toni also teaches on courses in the EPCC MSc in High Performance Computing. Prior to working at EPCC, Toni gained a PhD in computational condensed matter as well as an MSc in HPC and an MPhys in Mathematical Physics. As WHPC Director, Toni is responsible for leading the network as it grows, running events, and conducting research into improving the diversity of the HPC community. She has been on the organising committees of a variety of workshops and conferences, including leading the teams for the previous WHPC workshops and BoFs.
