GPU Programming with OpenACC Training Course
OpenACC is an open standard for heterogeneous programming that enables code to run across different platforms and devices, including multicore CPUs, GPUs, and FPGAs.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenACC to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up an OpenACC development environment.
- Write and run a basic OpenACC program.
- Annotate code with OpenACC directives and clauses.
- Use OpenACC API and libraries.
- Profile, debug, and optimize OpenACC programs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us.
Course Outline
Introduction
- What is OpenACC?
- OpenACC vs OpenCL vs CUDA vs SYCL
- Overview of OpenACC features and architecture
- Setting up the development environment
Getting Started
- Creating an OpenACC project in Visual Studio Code
- Exploring project structure and files
- Compiling and running the program
- Displaying output with printf and fprintf
OpenACC Directives and Clauses
- Understanding OpenACC directives and clauses
- Using the parallel directive to create parallel regions
- Using the kernels directive for compiler-managed parallelism
- Using the loop directive to parallelize loops
- Managing data movement with the data directive
- Synchronizing host and device copies with the update directive
- Improving data reuse with the cache directive
- Creating device functions with the routine directive
- Synchronizing asynchronous work with the wait directive
OpenACC API
- Understanding the role of OpenACC API
- Querying device information and capabilities
- Setting device number and type
- Handling errors and exceptions
- Creating and synchronizing events
OpenACC Libraries and Interoperability
- Understanding OpenACC libraries and interoperability
- Using math, random, and complex libraries
- Integrating with other models (CUDA, OpenMP, MPI)
- Integrating with GPU libraries (cuBLAS, cuFFT)
OpenACC Tools
- Overview of tools for OpenACC development
- Profiling and debugging OpenACC programs
- Performance analysis with PGI compilers, NVIDIA Nsight Systems, and Allinea Forge
Optimization
- Factors affecting OpenACC program performance
- Optimizing data locality and reducing transfers
- Optimizing loop parallelism and fusion
- Optimizing kernel parallelism and fusion
- Optimizing vectorization and auto-tuning
Summary and Next Steps
Requirements
- An understanding of C/C++ or Fortran and of parallel programming concepts
- Basic knowledge of computer architecture and memory hierarchy
- Experience with command-line tools and code editors
Audience
- Developers who wish to learn how to use OpenACC to program heterogeneous devices and exploit their parallelism
- Developers who wish to write portable and scalable code that can run on different platforms and devices
- Programmers who wish to explore the high-level aspects of heterogeneous programming and optimize their code productivity
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend comprises a series of AI processors engineered for high-efficiency inference and training tasks.
This instructor-led, live training (available online or on-site) targets intermediate-level AI engineers and data scientists seeking to create and optimize neural network models using Huawei’s Ascend platform alongside the CANN toolkit.
Upon completion of this training, participants will be capable of:
- Configuring and setting up the CANN development environment.
- Creating AI applications through MindSpore and CloudMatrix workflows.
- Enhancing performance on Ascend NPUs via custom operators and tiling techniques.
- Deploying models to both edge and cloud environments.
Course Format
- Interactive lectures coupled with discussions.
- Practical application of Huawei Ascend and the CANN toolkit within sample applications.
- Directed exercises centered on model construction, training, and deployment.
Customization Options
- To arrange customized training tailored to your specific infrastructure or datasets, please reach out to us.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei's AI compute stack, designed for the deployment and optimization of AI models on Ascend AI processors.
This instructor-led live training, available both online and onsite, targets intermediate-level AI developers and engineers who aim to efficiently deploy trained AI models to Huawei Ascend hardware. The curriculum focuses on utilizing the CANN toolkit along with tools such as MindSpore, TensorFlow, or PyTorch.
Upon completion of this training, participants will be able to:
- Grasp the CANN architecture and its critical role within the AI deployment pipeline.
- Convert and adapt models from popular frameworks into Ascend-compatible formats.
- Utilize tools like ATC, OM model conversion, and MindSpore for inference on both edge and cloud environments.
- Troubleshoot deployment issues and optimize performance on Ascend hardware.
Course Format
- Interactive lectures and demonstrations.
- Hands-on lab exercises utilizing CANN tools and Ascend simulators or devices.
- Practical deployment scenarios grounded in real-world AI models.
Course Customization Options
- To request a customized training session for this course, please contact us to arrange details.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix serves as Huawei's unified platform for developing and deploying AI solutions, specifically engineered to support scalable, production-grade inference pipelines.
This instructor-led live training, available in both online and onsite formats, is tailored for beginner to intermediate-level AI professionals looking to deploy and monitor AI models using the CloudMatrix platform, with seamless integration of CANN and MindSpore.
Upon completion of this training, participants will gain the ability to:
- Leverage CloudMatrix for packaging, deploying, and serving models.
- Convert and optimize models for Ascend chipsets.
- Establish pipelines for both real-time and batch inference tasks.
- Monitor deployments and fine-tune performance in production environments.
Course Format
- Interactive lectures and discussions.
- Practical application of CloudMatrix through real-world deployment scenarios.
- Guided exercises focusing on conversion, optimization, and scaling.
Customization Options
- For customized training based on your specific AI infrastructure or cloud environment, please contact us to arrange a session.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips optimized for inference and training in edge and datacenter scenarios.
This instructor-led, live training (online or onsite) is aimed at intermediate-level developers who wish to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
By the end of this training, participants will be able to:
- Set up and configure the BANGPy and Neuware development environments.
- Develop and optimize Python- and C++-based models for Cambricon MLUs.
- Deploy models to edge and data center devices running Neuware runtime.
- Integrate ML workflows with MLU-specific acceleration features.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of BANGPy and Neuware for development and deployment.
- Guided exercises focused on optimization, integration, and testing.
Course Customization Options
- To request a customized training for this course based on your Cambricon device model or use case, please contact us.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s AI computing toolkit, designed to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led live training (available online or onsite) targets beginner-level AI developers seeking to grasp how CANN integrates into the model lifecycle from training to deployment, and its interoperability with frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completing this training, participants will be capable of:
- Comprehending the purpose and architecture of the CANN toolkit.
- Configuring a development environment utilizing CANN and MindSpore.
- Converting and deploying a basic AI model onto Ascend hardware.
- Acquiring foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Interactive lectures and discussions.
- Practical hands-on labs focused on simple model deployment.
- Step-by-step guidance through the CANN toolchain and integration points.
Customization Options
- For customized training arrangements, please contact us.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310. It offers essential tools for compiling, optimizing, and deploying models in environments where computing power and memory are limited.
This instructor-led live training (available online or onsite) is designed for intermediate AI developers and integrators looking to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completion of this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Construct lightweight inference pipelines utilizing MindSpore Lite and AscendCL.
- Enhance model performance in resource-constrained environments.
- Deploy and monitor AI applications in real-world edge scenarios.
Course Format
- Interactive lectures and demonstrations.
- Practical lab exercises focused on edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Customization Options
- For a customized version of this course, please contact us to make arrangements.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI infrastructure, spanning from the low-level CANN SDK to the high-level MindSpore framework, delivers an integrated environment for developing and deploying AI solutions optimized for Ascend hardware.
This instructor-led, live training (available online or on-site) targets beginner to intermediate technical professionals aiming to understand how CANN and MindSpore collaborate to manage the AI lifecycle and inform infrastructure strategies.
Upon completion of this training, participants will be able to:
- Grasp the layered architecture of Huawei’s AI computing stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and its toolset in comparison to industry standards.
- Determine where Huawei’s AI stack fits within enterprise, cloud, or on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case study walkthroughs.
- Optional guided labs exploring the model flow from MindSpore to CANN.
Customization Options
- For a tailored version of this course, please contact us to arrange your requirements.
Optimizing Neural Network Performance with CANN SDK
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, enabling developers to fine-tune and optimize the efficiency of neural networks deployed on Ascend AI processors.
This instructor-led live training, available either online or on-site, is designed for advanced AI developers and system engineers seeking to boost inference performance using CANN’s sophisticated toolset, which includes the Graph Engine, TIK, and capabilities for custom operator development.
Upon completing this training, participants will be equipped to:
- Comprehend CANN's runtime architecture and its performance lifecycle.
- Utilize profiling tools and the Graph Engine for detailed performance analysis and optimization.
- Develop and optimize custom operators employing TIK and TVM.
- Address memory bottlenecks and enhance model throughput.
Course Format
- Interactive lectures and discussions.
- Practical labs featuring real-time profiling and operator tuning.
- Optimization exercises based on edge-case deployment scenarios.
Customization Options
- For those requiring a tailored training experience, please reach out to us to make arrangements.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) offers robust deployment and optimization tools designed for real-time AI applications in computer vision and NLP, particularly on Huawei Ascend hardware.
This instructor-led, live training (available online or onsite) is tailored for intermediate-level AI professionals looking to build, deploy, and optimize vision and language models using the CANN SDK for production environments.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Utilize CANN tools to convert models and integrate them into live pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Construct real-time CV/NLP pipelines suitable for edge or cloud-based deployment scenarios.
Course Format
- Interactive lectures and demonstrations.
- Hands-on labs focused on model deployment and performance profiling.
- Live pipeline design using real-world CV and NLP use cases.
Course Customization Options
- To request a customized training for this course, please contact us.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate the advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training session (available online or in-person) is designed for advanced system developers who aim to build, deploy, and refine custom operators for AI models using CANN’s TIK programming model and TVM compiler integration.
Upon completion of this training, participants will be able to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom operators into the CANN runtime and execution graph.
- Leverage TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for custom computation patterns.
Course Format
- Interactive lectures and demonstrations.
- Practical coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Customization Options
- To request customized training for this course, please contact us to make arrangements.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU solutions, including Huawei Ascend, Biren, and Cambricon MLUs, provide alternatives to CUDA specifically designed for domestic AI and High-Performance Computing (HPC) markets.
This instructor-led training, available online or onsite, targets advanced GPU developers and infrastructure experts seeking to migrate and optimize their current CUDA applications for deployment on Chinese hardware.
Upon completion, participants will be able to:
- Assess the compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Analyze performance metrics and identify key optimization opportunities across different platforms.
- Tackle practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Practical labs focusing on code translation and performance benchmarking.
- Guided exercises on strategies for multi-GPU adaptation.
Customization Options
- To request a tailored version of this course based on your specific platform or CUDA project, please contact us.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon represent the forefront of AI hardware platforms in China, each providing distinctive acceleration and profiling capabilities designed for large-scale AI workloads.
This instructor-led live training (available online or onsite) is tailored for advanced AI infrastructure and performance engineers seeking to enhance model inference and training processes across various Chinese AI chip architectures.
Upon completion of this training, participants will be equipped to:
- Conduct benchmarks on Ascend, Biren, and Cambricon platforms.
- Pinpoint system bottlenecks and inefficiencies related to memory and compute resources.
- Implement optimizations at the graph, kernel, and operator levels.
- Refine deployment pipelines to achieve superior throughput and reduced latency.
Course Format
- Interactive lectures and discussions.
- Practical application of profiling and optimization tools on each platform.
- Guided exercises centered on real-world tuning scenarios.
Course Customization Options
- For a customized training session tailored to your specific performance environment or model type, please contact us to make arrangements.