NCSA organized online training sessions throughout the Spring 2022 semester to help users get started with deep learning projects on HAL. These sessions are designed for novice users to learn about the system and start building deep neural network models. Watch all of our training sessions below!

## Spring 2022 Sessions

**Access trainings here:** https://go.ncsa.illinois.edu/CAIIHALTraining

#### February 2: Getting Started with HAL – Dawei Mu

This tutorial will introduce students to the HAL system and how to interact with it through the Open OnDemand interface, including Jupyter notebooks, and through the command-line interface via SSH. The tutorial will cover the Anaconda environment, batch job submission, and data transfer, as well as an overview of the basic machine learning workflow.
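As a rough illustration of the batch-submission workflow this session covers, a minimal Slurm job script might look like the sketch below. The partition-free resource requests, environment name, and script name are placeholders, not HAL's confirmed configuration; consult the HAL documentation for the actual values.

```shell
#!/bin/bash
# Minimal Slurm job script sketch -- resource values and names below are
# placeholders; check HAL's documentation for the real partition/GPU options.
#SBATCH --job-name=demo
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

# Activate an Anaconda environment (hypothetical name), then run training
source ~/.bashrc
conda activate my-dl-env
python train.py
```

After logging in via SSH, a script like this would be submitted with `sbatch job.sh` and monitored with `squeue`.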

#### February 9: Introduction to Machine Learning and Neural Networks – Asad Khan


#### February 16: Introduction to TensorFlow – Asad Khan

This tutorial will introduce the basics of TensorFlow necessary to build a neural network, train it, and evaluate the accuracy of the model.
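The build–train–evaluate cycle described above can be sketched in a few lines of TensorFlow. This is a minimal illustration on synthetic data, assuming TensorFlow 2.x is available (e.g. through a HAL Anaconda environment); it is not drawn from the session materials.

```python
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data: label is the sign of the feature sum
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8)).astype("float32")
y = (X.sum(axis=1) > 0).astype("int32")

# Build a small fully connected classifier
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train briefly, then evaluate the model's accuracy
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
```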

#### February 23: Machine Learning with H2O on HAL – Dawei Mu

This tutorial will cover basic machine learning techniques, such as Linear Regression, Decision Tree, Support Vector Machine, Naive Bayes, K-Nearest Neighbors, K-Means, and Random Forest, using the H2O machine learning platform on the HAL system.
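A few of the classic techniques listed above can be sketched as follows. The sketch uses scikit-learn rather than H2O (H2O requires a running Java backend), so it illustrates the algorithms, not the H2O API covered in the session.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy classification dataset, split into train and test sets
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit three of the session's classifiers and collect test accuracy
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```

In H2O the same workflow would go through an `H2OFrame` and the platform's estimator classes, but the train/score pattern is analogous.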

#### March 2: TBD

#### March 9: TBD

#### March 23: TBD

#### March 30: TBD

#### April 6: TBD

#### April 13: TBD

#### April 20: TBD

#### April 27: TBD

## Fall 2021 Sessions

**September 8:** Getting Started with HAL – Dawei Mu

This tutorial will introduce students to the HAL system and how to interact with it through the Open OnDemand interface, including Jupyter notebooks, and through the command-line interface via SSH. The tutorial will cover the Anaconda environment, batch job submission, and data transfer, as well as the basics of machine learning.

**Training Slides:** Slides

**September 15:** Hands-On Deep Learning for Computer Vision – Asad Khan

This tutorial will introduce how machine learning can be accomplished with neural networks and will go over various examples from simple dense networks to convolutional network architectures using TensorFlow on HAL system.
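The contrast between dense and convolutional architectures mentioned above can be made concrete by building both in Keras for the same image input and comparing their parameter counts. This is an illustrative sketch (assuming TensorFlow 2.x), not the session's notebook.

```python
import tensorflow as tf

# Fully connected model: flatten a 28x28 image, then dense layers
dense = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convolutional model: local filters + pooling before the classifier head
conv = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Both map an image batch to 10 class probabilities, but the convolutional
# model does so with far fewer parameters thanks to weight sharing
out_dense = dense(tf.zeros((1, 28, 28, 1)))
out_conv = conv(tf.zeros((1, 28, 28, 1)))
```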

**Training Instructions:** Google Doc **GitHub Repository Notebook:** GitHub Notebook

**September 22:** Intro to TensorFlow – Asad Khan

This tutorial will introduce the basics of TensorFlow necessary to build a neural network, train it, and evaluate the accuracy of the model.

**Training Instructions:** Google Doc **GitHub Repository Notebook:** GitHub Notebook

**September 29:** Intro to PyTorch – Yao-Yu Lin

This tutorial will teach how to build and train neural networks in PyTorch on HAL.
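The PyTorch build-and-train loop the session teaches can be sketched on toy data as below. This is a generic illustration assuming PyTorch is installed, not the session's own notebook.

```python
import torch
from torch import nn

# Toy binary-classification data: label is the sign of the feature sum
torch.manual_seed(0)
X = torch.randn(256, 8)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

# A small fully connected network and the standard training ingredients
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy in one step

initial_loss = loss_fn(model(X), y).item()
for _ in range(100):               # zero grads, forward, backward, step
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
final_loss = loss.item()
```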

**Training Slides:** Slides **GitHub:** GitHub Notebook

**October 6:** Data Loaders – William Eustis

The main objective of this tutorial is to show how to use data loaders provided with PyTorch and how to develop application-specific data loaders.
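An application-specific data loader of the kind described above is typically a small `Dataset` subclass wrapped in a `DataLoader`. The sketch below (assuming PyTorch is installed; the dataset itself is a made-up example) shows the pattern.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class SquaresDataset(Dataset):
    """Hypothetical application-specific dataset: (x, x**2) pairs."""

    def __init__(self, n):
        self.x = torch.linspace(0.0, 1.0, n).unsqueeze(1)

    def __len__(self):
        # Number of samples; DataLoader uses this for sampling
        return len(self.x)

    def __getitem__(self, idx):
        # One (input, target) pair per index
        return self.x[idx], self.x[idx] ** 2

# DataLoader handles batching and shuffling on top of the Dataset
loader = DataLoader(SquaresDataset(100), batch_size=16, shuffle=True)
xb, yb = next(iter(loader))
```

The same two-method recipe (`__len__`, `__getitem__`) scales to image folders, HDF5 files, or any custom storage format.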

**October 13:** Machine Learning with H2O on HAL – Dawei Mu

This tutorial will cover basic machine learning techniques, such as Linear Regression, Decision Tree, Support Vector Machine, Naive Bayes, K-Nearest Neighbors, K-Means, and Random Forest, using the H2O machine learning platform on the HAL system.

**Training Slides:** Slides

**October 20:** Distributed Deep Learning on HAL – Dawei Mu

This tutorial will explain how to use multiple GPUs for accelerating deep learning on HAL using frameworks such as TensorFlow or PyTorch.
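One common multi-GPU pattern on the TensorFlow side is `tf.distribute.MirroredStrategy`, which replicates the model across all visible GPUs and averages gradients; on a machine with no GPU it falls back to a single CPU replica, so this sketch still runs. It is a generic illustration, not HAL-specific configuration.

```python
import tensorflow as tf

# MirroredStrategy discovers the visible devices and synchronizes
# gradient updates across replicas during training
strategy = tf.distribute.MirroredStrategy()

# Model creation and compilation must happen inside the strategy's scope
# so variables are mirrored onto every replica
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

replicas = strategy.num_replicas_in_sync
```

`model.fit` is then called exactly as in the single-device case; the strategy splits each batch across the replicas. PyTorch offers an analogous path through `DistributedDataParallel`.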

**Training Slides:** Slides

**October 27:** Introduction to Natural Language Processing (NLP) with PyTorch – Volodymyr Kindratenko

This tutorial will introduce neural network architectures for processing natural language texts. We will cover Recurrent Neural Networks (RNNs) and Generative Neural Networks (GNNs), attention mechanisms, and how to build text classification models.
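A recurrent text-classification model of the kind described above usually chains an embedding layer, an RNN, and a linear head over the final hidden state. The sketch below (assuming PyTorch; the sizes are arbitrary) shows that skeleton with an LSTM.

```python
import torch
from torch import nn

class RNNClassifier(nn.Module):
    """Embedding -> LSTM -> linear head over the final hidden state."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):          # tokens: (batch, seq_len) int indices
        emb = self.embed(tokens)        # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)    # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])       # logits: (batch, num_classes)

# Forward pass on a random batch of token indices
model = RNNClassifier(vocab_size=1000, embed_dim=32,
                      hidden_dim=64, num_classes=2)
logits = model(torch.randint(0, 1000, (4, 12)))
```

An attention mechanism would replace the "last hidden state" readout with a learned weighted sum over all time steps.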

**November 3:** OpenVINO™ toolkit integration with TensorFlow, with hands-on practice on Intel DevCloud – Kumar Vishwesh & Yamini Mimmagadda

This tutorial will introduce the TensorFlow integration available with the OpenVINO™ toolkit, with hands-on practice in the Intel® DevCloud showing the minimal code changes needed with TensorFlow to gain greater inference performance with OpenVINO. Pre-registering for access to the Intel® DevCloud here is suggested; access is free.

**DevCloud example instructions:** Instructions **GitHub Notebook:** OpenVINO TensorFlow Classification Example **GitHub Notebook:** OpenVINO TensorFlow Object Detection Example

**November 10:** Robust Physics Informed Neural Networks – Avik Roy

Physics Informed Neural Networks (PINNs) have recently been found to be effective PDE solvers. This talk will focus on how traditional PINN architectures along with physics-inspired regularizers fail to retrieve the intended solution when training data is noisy and how this problem can be solved using Gaussian Process based smoothing techniques.

**Training Slides:** Slides **arXiv:** 2110.13330 **GitHub:** Notebook

**November 17:** Physics Informed Deep Learning – Shawn Rosofsky

This tutorial will explore how to incorporate physics into deep learning models with various examples ranging from using physics informed neural networks (PINNs) for forward and inverse problems to employing physics informed DeepONets for a hybrid data and physics approach to problems.
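The core PINN idea in these two sessions is to add a physics residual to the loss: the network's derivatives, obtained through automatic differentiation, are penalized for violating the governing equation at collocation points. A minimal sketch for the toy ODE u'(x) = u(x), assuming PyTorch (the network and equation here are illustrative, not from the sessions):

```python
import torch
from torch import nn

# Small network approximating the solution u(x)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Collocation points where the ODE residual is enforced
x = torch.linspace(0.0, 1.0, 64).unsqueeze(1).requires_grad_(True)
u = net(x)

# du/dx via autograd; create_graph=True lets the residual itself be
# differentiated during training
du_dx, = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)

# Physics loss: mean squared residual of u'(x) - u(x) = 0
physics_loss = ((du_dx - u) ** 2).mean()
```

In a full PINN this residual term is summed with boundary/initial-condition losses (and data losses for inverse problems) and minimized jointly.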