School on Parallel Programming for High Performance Computing (December 2-13, 2019)

School on Data Science and Machine Learning (December 16-20, 2019)

Preliminary Proposals

Introductory School on Parallel Programming and Parallel Architecture for High Performance Computing

Introduction

The School has the goal of teaching participating scientists about modern computer hardware and programming to provide a foundation for future computational research using High Performance Computing (HPC). Participants will go through an intensive programme with a focus on practical skills.

School participants will learn to improve the efficiency of their research codes, and to parallelize them. Lectures on a selection of technical aspects of modern HPC hardware will be mixed with introductions to widely used parallel programming tools and libraries. The hands-on sessions will allow participants to practice on small example problems of general scientific interest. Example topics will cover numerical methods and parallel strategies, as well as data management.

The programme specifically addresses the needs of scientists using, writing, or modifying HPC applications, and will not assume, require, or provide significant IT and HPC resource management skills.

It will focus on fundamental, HPC-relevant topics and on tools and libraries widely used in scientific high-performance computing:

  • Computer architectures for HPC and how to optimize for them
  • Parallel programming tools (MPI & OpenMP)
  • Portable, flexible and parallel I/O (HDF5)
  • Parallel programming best practices
  • Floating-point math
  • High-performance libraries for the solution of common math problems
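
As an illustration of the style of hands-on exercise, the sketch below estimates pi with a midpoint-rule integration whose work is split across MPI ranks and combined by a reduction. It is written in Python with mpi4py purely for compactness; the file name, problem, and language are illustrative assumptions, and the actual exercises (typically C or Fortran with MPI and OpenMP) will be defined by the lecturers.

    # pi_mpi.py -- hypothetical hands-on sketch, not part of the school material.
    # Each MPI rank integrates a strided share of [0, 1] and the partial sums
    # are combined on rank 0 with a reduction.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 10_000_000                   # total number of midpoint samples
    h = 1.0 / n
    i = np.arange(rank, n, size)     # this rank's strided share of the indices
    x = (i + 0.5) * h
    local = float(np.sum(4.0 / (1.0 + x * x)) * h)

    pi = comm.reduce(local, op=MPI.SUM, root=0)   # partial sums -> rank 0
    if rank == 0:
        print(f"pi ~= {pi:.12f} using {size} ranks")

Run with, for example, "mpirun -np 4 python pi_mpi.py"; the same decomposition-plus-reduction pattern carries over directly to C and Fortran MPI codes.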

The school will be organized in association with the International Centre for Theoretical Physics (ICTP/Trieste), the ICTP-SAIFR, and the Center for Scientific Computing (NCC/UNESP).

Program

The program will closely follow the one from previous schools (see Appendix). We intend to add some more advanced content, mainly on Machine/Deep Learning.

Tentative Lecturers

  • W. Bangerth, Colorado State University
  • R. Berger, Temple University, Philadelphia
  • G. Pringle, EPCC, Edinburgh
  • I. Girotto, ICTP Trieste
  • S. Stanzani, NCC Unesp
  • R. Iope, NCC Unesp
  • R. Cobe, NCC Unesp

Local Organizers

  • Nathan Berkovits (ICTP-SAIFR/IFT-UNESP, Brazil)
  • Raphael Cobe (NCC, Brazil)
  • Ivan Girotto (ICTP-Trieste, Italy)
  • Rogério Iope (NCC, Brazil)
  • Beraldo Leal (NCC, Brazil)
  • Sérgio Novaes (NCC and SPRACE, Brazil)

School on Data Science and Machine Learning for High Energy Physics

Introduction

The school has the goal of teaching participants about modern machine learning techniques, their strengths and shortcomings, and how to apply them in the context of High-Energy Physics (HEP). The school is targeted particularly at senior PhD students, working towards the completion of their thesis projects, as well as young postdocs.

School participants will learn the formalism of machine learning, starting from an introductory level and progressing to more advanced topics such as computer vision, sequential and recursive learning, anomaly and outlier detection, and adversarial networks. These lectures will be mixed with hands-on sessions where students will apply the concepts to real-world problems in HEP: detector simulation, track finding, and jet tagging.

The present proposal follows in many aspects previous events on this subject, such as the Data Science in High Energy Physics (DS@HEP) series.

The school will be organized in association with the California Institute of Technology (Caltech) and the Center for Scientific Computing (NCC/UNESP).

Lecture Topics

  • Introduction to Machine Learning
    • Regression vs. classification
    • (Boosted) decision trees
    • Neural networks
    • Deep neural networks
  • Computer Vision
    • Jet imaging
    • Convolutional neural networks
    • b-tagging, V-tagging, top-tagging
  • Sequential / Recursive Learning
    • Recursive neural networks
    • Long Short-term Memory (LSTM) networks
  • Anomaly / Outlier Detection
    • Data quality monitoring
    • Data certification
    • Hardware control
  • Adversarial Networks
    • Generative Adversarial Networks (GAN)
    • Detector simulation with GAN
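
As a rough illustration of the computer-vision material listed above, the sketch below defines a small convolutional network that classifies 32x32 "jet images" into signal and background, using TensorFlow/Keras with random arrays as stand-ins for real data. The image size, architecture, and framework are assumptions made only for this example and do not prescribe the school's actual hands-on exercises.

    # jet_cnn.py -- hypothetical sketch, not part of the school material.
    # A tiny CNN trained on random arrays that stand in for jet images.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    x = np.random.rand(256, 32, 32, 1).astype("float32")   # fake jet images
    y = np.random.randint(0, 2, size=(256,))                # fake signal/background labels

    model = keras.Sequential([
        layers.Input(shape=(32, 32, 1)),
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),              # P(signal)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=2, batch_size=32)

In the hands-on sessions the same kind of model would instead be trained on real or simulated detector data for tasks such as b-, V-, and top-tagging.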

Tentative Program

Time        | Monday             | Tuesday         | Wednesday                     | Thursday                  | Friday
09:00-10:30 | Introduction to ML | Computer Vision | Sequential/Recursive Learning | Anomaly/Outlier Detection | Adversarial Networks
10:30-11:00 | Coffee break       | Coffee break    | Coffee break                  | Coffee break              | Coffee break
11:00-12:30 | Introduction to ML | Computer Vision | Sequential/Recursive Learning | Anomaly/Outlier Detection | Adversarial Networks
12:30-14:30 | Lunch              | Lunch           | Lunch                         | Lunch                     | Lunch
14:30-16:00 | Hands-on           | Hands-on        | Hands-on                      | Hands-on                  | Summary Presentation
16:00-16:30 | Coffee break       | Coffee break    | Coffee break                  | Coffee break              | Coffee break
16:30-18:00 | Hands-on           | Hands-on        | Hands-on                      | Hands-on                  | Summary Presentation

Tentative Lecturers

  • P. Perona (California Institute of Technology, USA)
  • K. Cho (New York University)
  • C. Germain (Université Paris-Sud)
  • P. Balaprakash (Argonne National Laboratory)
  • S. Gleyzer (University of Florida)
  • M. Pierini (CERN)

Local Organizers

  • Maria Spiropulu (California Institute of Technology, USA)
  • Sergio Novaes (NCC-UNESP, Brazil)
  • Nathan Berkovits (ICTP-SAIFR/UNESP, Brazil)
  • Rogério Iope (NCC-UNESP, Brazil)
  • Thiago Tomei (NCC-UNESP, Brazil)
  • Raphael Cobe (NCC-UNESP, Brazil)

Organization Schedule

Important Deadlines

  • 01.June: Website
  • 15.June: Start online application
  • 15.July: Final version of the poster
  • 01.August: Distribution of the posters to several institutes worldwide
  • Until 22.September: Online announcements sent to physical societies, websites, social media, funding agencies and former participants
  • 29.September: Application deadline
  • 30.September-07.October: Application evaluation and ranking
  • 07.October: Acceptance notification and start to organize the visits
  • 07.October: Final version of the budget
  • 07.October-18.November: Organization of the visits and logistics (Visa letters, Travel, Lodging)
  • 11.November: Final version of the program at the website
  • 2-13.December: School on Parallel Programming for High Performance Computing
  • 16-20.December: School on Data Science and Machine Learning

Contact Persons

  • Overall organization of the event: Jandira Ferreira de Oliveira <jandira@ictp-saifr.org>
  • Visa letter, accommodation, information on transportation: Humberto <secretary@ictp-saifr.org>
