This workshop provides an overview of implementation strategies and tooling for handling the massive computation of deep neural networks on programmable devices. AMD adaptive SoCs are a suitable target for these approaches: depending on the device family, parts of the processing can be mapped to the FPGA fabric (programmable logic), to dedicated vector processing elements (AI Engines in Versal™), or to the hard-IP CPUs these devices offer. Revisiting deep neural networks, we use a Python-based modelling framework to create the topology and supporting functions, train a convolutional neural network, and apply it to unknown data. Design strategies are presented for running such trained networks on edge-class devices, ranging from hand-written HLS-level implementations to mapping neural networks onto predefined vectorized IP with Vitis™ AI. These concepts deploy directly on adaptive SoCs, and they also apply to traditional FPGA targets once some supporting infrastructure is added. While this shows how to implement a wide range of custom models, the course also presents a fast track to model evaluation by picking pretrained models from the Vitis™ AI Model Zoo. It closes with deploying these in the Zynq™ UltraScale+™ MPSoC family's Target Reference Designs (TRDs).
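To give a flavor of the modelling step described above, here is a minimal sketch of defining and training a small convolutional network in Python. The listing does not name the framework used in the workshop, so this example assumes PyTorch; the network shape, layer sizes, and random stand-in data are illustrative only, not course material.

```python
# Minimal CNN definition and one training step; assumes PyTorch as the
# Python-based modelling framework (not specified in the listing).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolutional stages followed by a linear classifier,
        # sized for 1x28x28 inputs (e.g. MNIST-style data).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # -> 16 x 14 x 14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # -> 32 x 7 x 7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on random stand-in data; a real flow would
# iterate over a DataLoader with labelled training images.
inputs = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()

# Inference on previously unseen ("unknown") data.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 1, 28, 28)).argmax(dim=1)
```

A trained model of this kind is the starting point for the deployment paths the workshop covers, whether implemented at the HLS level or compiled for vectorized IP with Vitis™ AI.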
9/15/2025 - 9/17/2025 · Time Zone: (GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna · Seats Remaining: 6 · Venue: Online - PLC2
11/19/2025 - 11/21/2025 · Time Zone: (GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna · Seats Remaining: 6 · Venue: Online - PLC2