AI & Deep Learning

Introduction

Date: July 29, 2021
Time: 9:30am – 12:00pm PT
Duration: 2.5 hours


Develop and Optimize Deep Learning Recommender Systems: Insights and Best Practices from NVIDIA, Facebook, and TensorFlow


Join fellow machine learning engineers and data scientists on July 29 for an engaging online conversation with experts who are building hardware and software that make recommenders run efficiently at scale.


By joining this Deep Learning Recommender Summit, you'll hear from ML engineers and data scientists at NVIDIA, Facebook, and TensorFlow on best practices, lessons learned, and insights for building and optimizing highly effective DL recommender systems.


Sessions


High-Performance Recommendation Model Training at Facebook

Deep learning recommendation models are the single largest AI application at Facebook, consuming more compute cycles than any other workload in our large-scale data centers. Training recommendation models on GPUs is challenging because the models often contain large embedding tables and combine compute-intensive, memory-intensive, and communication-intensive components. In this talk, we will first analyze how model architecture affects GPU performance and efficiency, then present the optimization techniques we applied to improve GPU utilization, including an optimized PyTorch-based training stack supporting both model and data parallelism, high-performance GPU operators, efficient embedding table sharding, memory hierarchy optimizations, and pipelining.
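To make the compute/memory split concrete, here is a minimal, illustrative PyTorch sketch (not Facebook's production stack): the large embedding tables are the memory- and communication-heavy side that is typically sharded across GPUs (model parallelism), while the dense MLP layers are compute-bound and usually replicated (data parallelism). Table sizes, feature counts, and dimensions below are made-up placeholders.

```python
# Illustrative PyTorch sketch (not Facebook's production training stack).
# The sparse side (embedding tables) is memory- and communication-heavy and
# is typically sharded across GPUs; the dense MLPs are compute-bound and
# typically replicated with data parallelism.
import torch
import torch.nn as nn

class ToyRecModel(nn.Module):
    def __init__(self, table_sizes, embedding_dim=64, dense_in=13):
        super().__init__()
        # One embedding table per categorical feature; in production these
        # tables can reach billions of rows, hence embedding table sharding.
        self.tables = nn.ModuleList(
            nn.EmbeddingBag(n, embedding_dim, mode="sum") for n in table_sizes
        )
        # Dense features pass through a compute-intensive MLP.
        self.bottom_mlp = nn.Sequential(nn.Linear(dense_in, embedding_dim), nn.ReLU())
        self.top_mlp = nn.Sequential(
            nn.Linear(embedding_dim * (len(table_sizes) + 1), 1), nn.Sigmoid()
        )

    def forward(self, dense_x, sparse_ids):
        sparse_vecs = [table(ids) for table, ids in zip(self.tables, sparse_ids)]
        dense_vec = self.bottom_mlp(dense_x)
        return self.top_mlp(torch.cat([dense_vec] + sparse_vecs, dim=1))

# Toy batch: 8 examples, 13 dense features, and two categorical features with
# 5 ids each (sizes chosen only for illustration).
model = ToyRecModel(table_sizes=[10_000, 50_000])
dense_x = torch.randn(8, 13)
sparse_ids = [torch.randint(0, 10_000, (8, 5)), torch.randint(0, 50_000, (8, 5))]
print(model(dense_x, sparse_ids).shape)  # torch.Size([8, 1])
```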


RecSys 2021 Challenge: Predicting User Engagements with Deep Learning Recommender Systems

The NVIDIA team, a collaboration of Kaggle Grandmasters and the NVIDIA Merlin team, won the RecSys 2021 Challenge. The challenge was hosted by Twitter, which provided a dataset of almost one billion tweet-user pairs. The team will present their winning solution, with a focus on deep learning architectures and how to optimize them.


Revisiting Recommender Systems on GPU

In this talk we'll explore changes in GPU hardware over the last generation that make it much better suited to the recommendation problem, along with improvements on the software side that take advantage of optimizations only possible in the recommendation domain. A new era of faster ETL, training, and inference is coming to the RecSys space, and this talk will walk through some of the patterns of optimization that guide the tools we're building to make recommenders faster and easier to use on the GPU.
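As a taste of what GPU-accelerated ETL can look like, here is a small, hedged sketch using cuDF from RAPIDS, one of the libraries in this space; the file name, column names, and features are illustrative assumptions rather than anything from the talk.

```python
# Illustrative cuDF (RAPIDS) sketch of GPU-accelerated ETL for recommender
# data. File name, columns, and features are assumptions for illustration.
import cudf

# Hypothetical interaction log with user_id, item_id, and a binary click label.
interactions = cudf.read_parquet("interactions.parquet")

# Typical per-user and per-item click-rate features, computed entirely on the
# GPU with a pandas-like API.
user_ctr = (
    interactions.groupby("user_id")["click"].mean()
    .reset_index()
    .rename(columns={"click": "user_ctr"})
)
item_ctr = (
    interactions.groupby("item_id")["click"].mean()
    .reset_index()
    .rename(columns={"click": "item_ctr"})
)

features = (
    interactions.merge(user_ctr, on="user_id", how="left")
                .merge(item_ctr, on="item_id", how="left")
)
print(features.head())
```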


TensorFlow Recommenders

TensorFlow Recommenders is an end-to-end library for recommender system models: from retrieval, through ranking, to post-ranking. In this talk, we describe how TensorFlow Recommenders can be used to fit and safely deploy sophisticated recommender systems at scale.
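For a sense of the library's API, here is a minimal two-tower retrieval sketch in the spirit of the TensorFlow Recommenders quickstart; the feature names and tiny in-memory vocabularies are illustrative assumptions, and exact details may vary across TFRS versions.

```python
# Minimal two-tower retrieval sketch in the spirit of the TensorFlow
# Recommenders quickstart. Feature names and the tiny in-memory vocabularies
# are illustrative assumptions.
import tensorflow as tf
import tensorflow_recommenders as tfrs

user_ids = ["u1", "u2", "u3"]
item_ids = ["i1", "i2", "i3", "i4"]

# Query tower and candidate tower: string lookup followed by an embedding.
user_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=user_ids),
    tf.keras.layers.Embedding(len(user_ids) + 1, 32),
])
item_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=item_ids),
    tf.keras.layers.Embedding(len(item_ids) + 1, 32),
])

class RetrievalModel(tfrs.Model):
    def __init__(self):
        super().__init__()
        self.user_model = user_model
        self.item_model = item_model
        # In practice you would add a FactorizedTopK metric over the full
        # candidate corpus; omitted here to keep the sketch minimal.
        self.task = tfrs.tasks.Retrieval()

    def compute_loss(self, features, training=False):
        user_emb = self.user_model(features["user_id"])
        item_emb = self.item_model(features["item_id"])
        return self.task(user_emb, item_emb)

model = RetrievalModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
train = tf.data.Dataset.from_tensor_slices(
    {"user_id": ["u1", "u2", "u3"], "item_id": ["i2", "i1", "i4"]}
).batch(3)
model.fit(train, epochs=1)
```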





Speakers

Maciej Kula

Engineer, Google Research

Maciej is an engineer working on recommender systems at Google Research. He helps maintain several libraries for recommender systems, including TensorFlow Recommenders and LightFM.

Xiaodong Wang

Research Scientist, AI Infra at Facebook

Xiaodong Wang is a research scientist at Facebook. He received his PhD from Cornell University in 2017 and his BE from Shanghai Jiao Tong University in 2011. Since joining Facebook in 2017, he has been working on various GPU-based projects, such as performance characterization and evaluation of emerging AI workloads and optimizing and productionizing recommendation models on GPUs.

Jade Nie

Research Scientist, AI Infra at Facebook

Jade Nie is a research scientist at Facebook AI Infra. She received her PhD from Princeton University in 2019 and her BE from Tsinghua University in 2014. Since joining Facebook in 2019, she has been working on building high-performance GPU-based training platforms for Facebook's recommendation models.

Even Oldridge

Sr. Manager, RecSys Platform Team, NVIDIA

Even Oldridge is a senior applied research scientist at NVIDIA and leads the team developing NVTabular. He has a PhD in Computer Vision but has spent the last five years working in the recommender system space with a focus on deep learning–based recommender systems.

Benedikt Schifferer

Deep Learning Engineer, NVIDIA

Benedikt Schifferer is a deep learning engineer at NVIDIA working on recommender systems. Prior to his work at NVIDIA, he graduated with a master of science in data science from Columbia University, New York and developed recommender systems for a German ecommerce company.

Chris Deotte

Senior Data Scientist, NVIDIA

Chris Deotte earned a BA in mathematics then worked as a graphic artist, photographer, carpenter, and teacher. He also earned a PhD in computational science and mathematics with a thesis on optimizing parallel processing and now works as a data scientist and researcher. Chris is a 4x Kaggle Grandmaster.

Additional Speakers


Bo Liu

Senior Deep Learning Data Scientist, NVIDIA

Gilberto Titericz

Senior Data Scientist, NVIDIA


Ronay Ak

Senior Data Scientist, NVIDIA

