
HPCS2017 Panels

The International Conference on High Performance Computing & Simulation 
(HPCS 2017)

The 15th Annual Meeting

July 17 – 21, 2017
Genoa, Italy
http://hpcs2017.cisedu.info  or  http://cisedu.us/rp/hpcs17


PANEL I: Programming Models for the Exascale Era

MODERATOR:    Biagio Cosenza 
                                        (Technische Universität Berlin, Germany)

     Marco Aldinucci              (University of Turin, Italy)
     Ronald B. Brightwell     (Sandia National Laboratories, New Mexico, USA)
     Paul C. Messina               (Argonne National Laboratory and Exascale Computing Project Director, Illinois, USA)
     David W. Walker             (Cardiff University, U.K.)

Exascale systems will pose several challenges. They are expected to have approximately 3-5 orders of magnitude more concurrency than current Petascale systems; however, the available memory will not scale by the same factor. Exascale platforms will be energy constrained, so reducing their power consumption will be a paramount concern. At the core of Exascale systems will be massively parallel, heterogeneous processor architectures and various types of accelerators, and resilience management will become a fundamental part of the software stack.

While there are indications of how those systems are being developed, we still do not know how we will program them. Current programming models such as MPI + Fortran or C, which have served well for the past decades, are likely to be inadequate for Exascale platforms. Next-generation programming models will have to address scalability and portability, as well as offer a transition path for applications through interoperability and multi-language support. The models will have to help with power management. They will integrate compilers and runtime systems, and provide a high level of productivity for applications across current and emerging domains. Debuggers, benchmarks, performance analysis tools, and autotuning will be desirable given the massive amount of parallelism and detail that can easily overwhelm the user.

This panel will attempt to address the above issues and project how best to meet these challenges with a new class of programming models and software solutions to support computing at the Exascale level.


Marco Aldinucci has been a professor at the Computer Science Department of the University of Torino (UNITO) since 2014. Previously, he was a compiler engineer at Quadrics Ltd. and a postdoctoral researcher at the University of Pisa and the Italian National Research Council (ISTI-CNR). He has been a visiting professor at the School of Informatics of the University of Edinburgh. He is the author of over 120 research papers and has participated in over 20 research projects concerning parallel and distributed computing. He is the recipient of the HPC Advisory Council University Award 2011, the NVIDIA Research Award 2013, and the IBM Faculty Award 2015. He leads the parallel computing group at UNITO, and serves as director of the HPC lab at the innovation centre of UNITO (ICxT) and as vice-president of the scientific computing competency centre at UNITO (C3S). He co-designed a number of frameworks for parallel computing, including FastFlow, which has been the background technology of the EU FP7/H2020 projects ParaPhrase, REPARA, and RePhrase. His research focuses on parallel and distributed computing and HPC.

Ronald B. Brightwell received his BS in mathematics in 1991 and his MS in computer science in 1994 from Mississippi State University. He joined Sandia National Laboratories in 1995 after serving as a graduate research assistant in the system software thrust at the MSU/NSF Engineering Research Center for Computational Field Simulation (now known as the High Performance Computing Collaboratory). While at Sandia, he has worked on several research and development projects associated with system software and high-performance networking for large-scale, massively parallel, distributed-memory, scientific computing systems. He has designed and developed high-performance implementations of the Message Passing Interface (MPI) standard on several platforms, including the Cray T3D and T3E, the Intel Paragon and TeraFLOPS (ASCI/Red), Sandia's Computational Plant Linux clusters, and the Cray Red Storm (XT3). His research interests include high-performance, scalable communication interfaces and protocols for system area networks, operating systems for massively parallel processing machines, and parallel program performance analysis libraries and tools. Mr. Brightwell is a Senior Member of the IEEE and the IEEE Computer Society and a Senior Member of the Association for Computing Machinery.

Biagio Cosenza is a Senior Researcher at Technische Universität Berlin, Germany. He graduated from the University of Salerno, Italy, where he also received his PhD degree in Computer Science with a thesis on efficient distributed load balancing. During his doctoral studies, he was the recipient of a DAAD scholarship at VRC (University of Stuttgart, Germany), two HPC Translational Grants (HPC-Europa2 and HPC-Europa++) at HLRS Stuttgart, and other computational grants. From 2011 to 2015, he was a Postdoctoral Researcher at the University of Innsbruck, Austria, where he contributed to the Insieme compiler and the libWater distributed runtime system. He also supported research within the FWF DK-plus, a multidisciplinary research platform on which he collaborated with chemists, astrophysicists, and engineers. His current research interests at TU Berlin include compilers, high performance computing, parallel algorithms, and heterogeneous and massively parallel architectures.

Dr. Paul Messina is Advisor to the Associate Laboratory Director on Exascale and Argonne Distinguished Fellow at Argonne National Laboratory.  His current role is Project Director for the U.S. DOE Exascale Computing Project, a multi-laboratory project.  During 2008-2015 he served as Director of Science for the Argonne Leadership Computing Facility, and in 2002-2004 as Distinguished Senior Computer Scientist at Argonne and as Advisor to the Director General at CERN (European Organization for Nuclear Research).
From 1987 to 2002, Dr. Messina served as founding Director of the California Institute of Technology's (Caltech) Center for Advanced Computing Research, as Assistant Vice President for Scientific Computing, and as Faculty Associate for Scientific Computing at Caltech. During a leave from Caltech in 1999-2000, he led the DOE-NNSA Accelerated Strategic Computing Initiative.
In his first association with Argonne from 1973-1987, he held a number of positions in the Applied Mathematics Division and was the founding Director of the Mathematics and Computer Science Division. 

David W. Walker is Professor of High Performance Computing in the School of Computer Science and Informatics at Cardiff University. Professor Walker has conducted research into parallel and distributed applications and software environments for the past 30 years and has published over 140 papers on these subjects. Before joining Cardiff University, Professor Walker spent ten years in the United States at the California Institute of Technology, the University of South Carolina, and Oak Ridge National Laboratory. During this time, he was involved in the specification of MPI and the development of the ScaLAPACK software library. Professor Walker is co-editor of Concurrency and Computation: Practice and Experience, a principal editor of Computer Physics Communications, and serves on the editorial boards of the International Journal of High Performance Computing Applications, and the Journal of Computational Science. 


PANEL II: At the Intersection of HPC, Cloud and Big Data: 
         Moving Data Analytics to the Edge. Where, What, When, and How?

MODERATOR:    Vincenzo De Maio 
                               (Vienna University of Technology, Austria)

     Ashiq Anjum            (University of Derby, U.K.)
     Antonio Brogi          (Università di Pisa, Italy)
     Rizos Sakellariou   (University of Manchester, U.K.)

The ever-increasing data volume generated by different types of applications, such as healthcare and various IoT-based systems, requires efficient processing and storage of data coming from different sources in order to extract meaningful information. Since sufficient computational resources are not always available locally to perform the required processing, big-data analytics is often performed on Cloud infrastructures. However, due to the geographically distributed nature of such infrastructures, performing analytics in the Cloud may significantly increase the time needed to obtain results because of network latency. Edge Computing has been proposed as a solution to this problem: data processing is done in micro data centres that are geographically closer to the user, in order to minimize latency.

However, micro data centres have limited resources in comparison to massive data centres. This leaves the research community with several problems to address. First of all, due to the limited storage space available at micro data centres, not all the data needed for analytics can be stored on the Edge. Another issue concerns the granularity of the micro data centre required by different applications, and its energy supply.

Therefore, it is logical to pose some questions: is Edge Computing the right solution to the aforementioned issues? If not, why? If yes, how should these issues be addressed in the new computing paradigm? How much will the deployment of an application on the Edge affect its QoS? Is this solution scalable?

In this panel, we want to discuss the challenges and the possible improvements to the current state-of-the-art in big data analytics on the Edge, and the role of HPC in addressing current and future trends. 


Ashiq Anjum is a Professor of Distributed Systems in the College of Engineering and Technology at the University of Derby. Before this, he was a Research Associate at the Department of Computing, Imperial College London. He has been working on various collaborative projects with CERN, Geneva, Switzerland, for the last fifteen years. His areas of research include distributed and parallel systems (including high performance computing, grid and cloud computing) and scalable methods to mine large and complex datasets (big data analytics). Prof. Anjum has worked on a number of research projects funded by various European, American, and Asian funding agencies, and has participated in numerous computing schools, meetings, and conferences to present his research work. He has more than 100 peer-reviewed publications to his credit. Before starting an academic career, he worked for various multinational software companies for around 7 years. Prof. Anjum has been part of several EC-funded projects in distributed systems, machine learning, and data mining, such as Health-e-Child (IP, FP6), neuGRID (STREP, FP7), and TRANSFORM (IP, FP7). He recently received funding from RCUK to provide a large-scale, cloud-based Video Analytics as a Service (VaaS) platform for surveillance and object tracking. He is also actively working with a leading pharma company to propose next-generation solutions in clinical intelligence and integration, clinical and genomics data integration, iterative genome analytics, metadata catalogues, and mining disease registries.

Antonio Brogi has been a full professor at the Department of Computer Science, University of Pisa (Italy) since 2004. He holds a Ph.D. in Computer Science (1993) from the University of Pisa. His research interests include service-oriented, cloud-based, and fog computing; coordination and adaptation of software elements; and formal methods. He has published the results of his research in over 150 papers in international journals and conferences. He has recently coordinated the “Through the Fog” project funded by the University of Pisa.

Vincenzo De Maio is a Postdoctoral Researcher at the Institute of Software Technology and Interactive Systems, Vienna University of Technology (TU Wien). Before joining TU Wien in 2017, he was a research fellow at the University of Salerno (UNISA). He received an MSc degree in Computer Science from UNISA (2011) and a PhD degree in Computer Science from the University of Innsbruck (2016), with a thesis titled "Virtual Machine Migration Energy Consumption Simulation in Cloud Computing". He has authored several conference and journal papers on Cloud and Edge computing.

Rizos Sakellariou is a Senior Lecturer (equivalent to Associate Professor in a 3-tier faculty system) at the University of Manchester, UK. He has been involved with parallel computing since 1992, when his first exposure was programming a KSR1 supercomputer as part of his MSc project. Following a PhD in 1997 for a thesis titled “On the Quest for Perfect Load Balance in Loop-Based Parallel Computations”, he worked as a postdoc from 1998 to 1999 with the Center for Research on Parallel Computation at Rice University, and joined the faculty of the School of Computer Science at the University of Manchester in 2000, where he has remained since, currently leading a research laboratory that over the last ten years has hosted about 30 doctoral students, researchers, and visitors. He has carried out research on a number of topics, including parallelizing compilers, performance prediction, load balancing, scheduling, resource allocation, multithreading, and, more recently, cloud management and scientific workflows. He has published over 130 papers in refereed journals and conference proceedings, which have attracted over 4000 Google Scholar citations, has successfully supervised six doctoral students, and has been involved with 17 funded projects. He has been actively involved in the community, having participated in the organization of over 130 conferences, primarily as a PC member, including several years of service for conferences such as IPDPS, Supercomputing, CCGrid, and more. He is a member of the Steering Committee of Euro-Par, the premier European conference on all aspects of parallel processing, as well as a founding member of the Steering Committee of the newly established Euro-EDUPAR, which looks into teaching parallel and distributed computing principles to undergraduate students.