BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.2//
METHOD:PUBLISH
X-WR-CALNAME;VALUE=TEXT:DIAG Events
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
DTSTART:20241027T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240331T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.28345.field_data.0@www.diag.uniroma1.it
DTSTAMP:20260404T130129Z
CREATED:20240628T085506Z
DESCRIPTION:Abstract: Many datasets are best represented as graphs of
  entities connected by relationships rather than as a single uniform
  dataset or table. Graph Neural Networks (GNNs) have been used to
  achieve state-of-the-art performance in tasks such as classification
  and link prediction. This talk will discuss recent research on
  scalable GNN training. The talk will focus on the popular mini-batch
  approach to GNN training\, where each iteration consists of three
  steps: sampling the k-hop neighbors of the mini-batch\, loading the
  samples onto the GPUs\, and training. The first part of the talk will
  discuss NextDoor\, which showed for the first time that we can
  significantly speed up end-to-end GNN training by using GPU-based
  sampling. To maximize the utilization of GPU resources and speed up
  sampling\, NextDoor proposes a new form of parallelism\, called
  transit parallelism. The second part of the talk focuses on a new
  approach called split parallelism to run the entire mini-batch
  training pipeline on GPUs. It presents a system called GSplit\, which
  avoids redundant data loads and has all GPUs perform sampling and
  training cooperatively on the same mini-batch. Finally\, the last
  part of the talk will discuss results from an experimental comparison
  between full-graph and mini-batch training systems. Short Bio: Marco
  Serafini is an assistant professor at the Manning College of
  Information and Computer Sciences at UMass Amherst. He works on
  systems for graph learning\, mining\, and data management (e.g.\, the
  Arabesque\, LiveGraph\, NextDoor\, and GSplit projects)\, cloud data
  management systems (e.g.\, Accordion\, E-Store\, and Clay)\, and
  big-data systems\, including contributions to the Apache ZooKeeper
  and Storm projects. He has served on the Program Committees of major
  conferences in systems and database management\, including SOSP\,
  OSDI\, EuroSys\, SIGMOD\, ASPLOS\, VLDB\, and ICDE\, among others\,
  as Program Chair of the LADIS and APSys workshops\, and as an
  Associate Editor for SIGMOD.
DTSTART;TZID=Europe/Paris:20240703T113000
DTEND;TZID=Europe/Paris:20240703T113000
LAST-MODIFIED:20240628T091336Z
LOCATION:Aula Magna DIAG\, Via Ariosto 25
SUMMARY:Parallelizing GPU-based Mini-Batch Graph Neural Network Training - 
 Marco Serafini
URL;VALUE=URI:https://www.diag.uniroma1.it/node/28345
END:VEVENT
END:VCALENDAR
