AI Inference and Deployment with CloudMatrix Training Course
CloudMatrix is Huawei’s unified AI development and deployment platform designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform with CANN and MindSpore integration.
By the end of this training, participants will be able to:
- Utilize CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models for Ascend chipsets.
- Establish pipelines for real-time and batch inference tasks.
- Monitor deployments and fine-tune performance in production settings.
Format of the Course
- Interactive lecture and discussion.
- Practical use of CloudMatrix with real deployment scenarios.
- Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
- To request a customized training for this course based on your AI infrastructure or cloud environment, please contact us to arrange.
Course Outline
Introduction to Huawei CloudMatrix
- CloudMatrix ecosystem and deployment flow
- Supported models, formats, and deployment modes
- Typical use cases and supported chipsets
Preparing Models for Deployment
- Model export from training tools (MindSpore, TensorFlow, PyTorch)
- Using ATC (Ascend Tensor Compiler) for format conversion
- Static vs dynamic shape models
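The conversion step covered above can be sketched as a small helper that assembles an ATC command line. The flags shown (`--model`, `--framework`, `--output`, `--soc_version`, `--input_shape`) mirror the real ATC tool's documented options; the file names, SoC version, and input shape are hypothetical placeholders.

```python
# Sketch: build an ATC (Ascend Tensor Compiler) command for converting a
# frozen TensorFlow graph to an offline (.om) model. File names and the
# input shape are hypothetical; the flags follow ATC's documented options.

def build_atc_command(model_path, output_name, soc_version="Ascend310",
                      framework=3, input_shape="input:1,224,224,3"):
    """framework=3 selects TensorFlow in ATC's framework numbering."""
    return [
        "atc",
        f"--model={model_path}",
        f"--framework={framework}",
        f"--output={output_name}",
        f"--soc_version={soc_version}",
        f"--input_shape={input_shape}",
    ]

cmd = build_atc_command("resnet50.pb", "resnet50_om")
print(" ".join(cmd))
```

In practice the resulting list would be passed to `subprocess.run` on a machine with the CANN toolkit installed; dynamic-shape models take additional shape-range options in place of the fixed dimensions shown here.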
Deploying to CloudMatrix
- Service creation and model registration
- Deploying inference services via UI or CLI
- Routing, authentication, and access control
Serving Inference Requests
- Batch vs real-time inference flows
- Data preprocessing and postprocessing pipelines
- Calling CloudMatrix services from external apps
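The pre- and postprocessing steps above can be sketched as a minimal pair of functions wrapped around an inference call. The normalization constants and label set are hypothetical; a real model's requirements would come from its training configuration.

```python
import math

# Sketch of a minimal pre/postprocessing pair around an inference call.
# Normalization constants and labels are hypothetical stand-ins.

def preprocess(pixels, mean=127.5, scale=1 / 127.5):
    """Scale raw 0-255 pixel values to roughly [-1, 1]."""
    return [(p - mean) * scale for p in pixels]

def postprocess(logits, labels):
    """Numerically stable softmax over raw outputs; returns (label, probability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return labels[best], probs[best]

label, prob = postprocess([2.0, 0.5, 0.1], ["cat", "dog", "car"])
print(label, round(prob, 3))
```

In a deployed pipeline, the model call between these two functions would be an HTTP or SDK request to the serving endpoint; keeping both stages as plain functions makes them easy to unit-test independently of the service.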
Monitoring and Performance Tuning
- Deployment logs and request tracking
- Resource scaling and load balancing
- Latency tuning and throughput optimization
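The tuning topics above rest on being able to measure latency percentiles and throughput in the first place. The sketch below times a stand-in inference function; the sleep-based stub is hypothetical, and in production the timed call would be the deployed service endpoint.

```python
import time

# Sketch: collect per-request latencies for a stand-in inference function
# and derive p50/p99 latency plus overall throughput.

def percentile(samples, pct):
    """Nearest-rank percentile over a list of samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(pct / 100 * len(ordered)))
    return ordered[idx]

def benchmark(infer, requests=50):
    latencies = []
    start = time.perf_counter()
    for _ in range(requests):
        t0 = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "p50_s": percentile(latencies, 50),
        "p99_s": percentile(latencies, 99),
        "throughput_rps": requests / elapsed,
    }

stats = benchmark(lambda: time.sleep(0.001))
print(stats)
```

Reporting percentiles rather than averages matters here: tail latency (p99) is what load balancing and scaling decisions typically target, and it is invisible in a mean.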
Integration with Enterprise Tools
- Connecting CloudMatrix with OBS and ModelArts
- Using workflows and model versioning
- CI/CD for model deployment and rollback
End-to-End Inference Pipeline
- Deploying a complete image classification pipeline
- Benchmarking and validating accuracy
- Simulating failover and system alerts
Summary and Next Steps
Requirements
- An understanding of AI model training workflows
- Experience with Python-based ML frameworks
- Basic familiarity with cloud deployment concepts
Audience
- AI ops teams
- Machine learning engineers
- Cloud deployment specialists working with Huawei infrastructure
Open Training Courses require 5+ participants.
Testimonials (1)
Step by step training with a lot of exercises. It was like a workshop and I am very glad about that.
Ireneusz - Inter Cars S.A.
Course - Intelligent Applications Fundamentals
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend is a series of AI processors designed for high-performance inference and training.
This instructor-led, live training (available both online and onsite) is aimed at intermediate-level AI engineers and data scientists who wish to develop and optimize neural network models using Huawei’s Ascend platform and the CANN toolkit.
By the end of this training, participants will be able to:
- Set up and configure the CANN development environment.
- Develop AI applications using MindSpore and CloudMatrix workflows.
- Optimize performance on Ascend NPUs by utilizing custom operators and tiling techniques.
- Deploy models to edge or cloud environments.
Format of the Course
- Interactive lecture and discussion sessions.
- Hands-on experience with Huawei Ascend and the CANN toolkit in sample applications.
- Guided exercises focused on building, training, and deploying models.
Course Customization Options
- To request a customized training for this course tailored to your infrastructure or datasets, please contact us to arrange.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing stack designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led, live training (available online or onsite) is aimed at intermediate-level AI developers and engineers who wish to efficiently deploy trained AI models to Huawei Ascend hardware using the CANN toolkit and tools such as MindSpore, TensorFlow, or PyTorch.
By the end of this training, participants will be able to:
- Understand the architecture of CANN and its role in the AI deployment process.
- Convert and adapt models from popular frameworks to formats compatible with Ascend.
- Utilize tools like ATC, OM model conversion, and MindSpore for inference on edge devices and in the cloud.
- Diagnose deployment issues and optimize performance on Ascend hardware.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios based on real-world AI models.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
AI Engineering Fundamentals
14 Hours
This instructor-led, live training in Slovakia (online or onsite) is aimed at beginner-level to intermediate-level AI engineers and software developers who wish to gain a foundational understanding of AI engineering principles and practices.
By the end of this training, participants will be able to:
- Understand the core concepts and technologies behind AI and machine learning.
- Implement basic machine learning models using TensorFlow and PyTorch.
- Apply AI techniques to solve practical problems in software development.
- Manage and maintain AI projects using best practices in AI engineering.
- Recognize the ethical implications and responsibilities involved in developing AI systems.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads, supporting large-scale training and inference.
This instructor-led, live training (online or on-site) is aimed at intermediate to advanced developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand the architecture and memory hierarchy of Biren GPUs.
- Set up the development environment and utilize Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Building Intelligent Applications with AI and ML
28 Hours
This instructor-led, live training in Slovakia (online or onsite) is aimed at intermediate-level to advanced-level AI professionals and software developers who wish to build intelligent applications using AI and ML.
By the end of this training, participants will be able to:
- Understand the advanced concepts and technologies behind AI and ML.
- Analyze and visualize data to inform AI/ML model development.
- Build, train, and deploy AI/ML models effectively.
- Create intelligent applications that can solve real-world problems.
- Evaluate the ethical implications of AI applications in various industries.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit designed for compiling, optimizing, and deploying AI models on Ascend AI processors.
This instructor-led, live training (available online or onsite) is targeted at beginner-level AI developers who want to understand how CANN integrates into the model lifecycle from training to deployment, and how it works with frameworks such as MindSpore, TensorFlow, and PyTorch.
By the end of this training, participants will be able to:
- Grasp the purpose and architecture of the CANN toolkit.
- Set up a development environment using CANN and MindSpore.
- Convert and deploy a simple AI model to Ascend hardware.
- Acquire foundational knowledge for future CANN optimization or integration projects.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs focusing on simple model deployment.
- Step-by-step walkthrough of the CANN toolchain and integration points.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit facilitates robust AI inference on edge devices like the Ascend 310. CANN offers essential tools for compiling, optimizing, and deploying models in environments with limited compute and memory resources.
This instructor-led, live training (available online or onsite) is designed for intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Develop lightweight inference pipelines using MindSpore Lite and AscendCL.
- Enhance model performance in environments with constrained compute and memory resources.
- Deploy and monitor AI applications in real-world edge scenarios.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work with models and scenarios specific to edge devices.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack, ranging from the low-level CANN SDK to the high-level MindSpore framework, provides a seamlessly integrated environment for AI development and deployment, optimized specifically for Ascend hardware.
This instructor-led, live training (available both online and onsite) is designed for technical professionals at beginner to intermediate levels who are interested in understanding how the CANN and MindSpore components work together to support AI lifecycle management and infrastructure decisions.
By the end of this training, participants will be able to:
- Grasp the layered architecture of Huawei’s AI compute stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and its toolchain in comparison to industry alternatives.
- Integrate Huawei's AI stack into enterprise or cloud/on-prem environments effectively.
Format of the Course
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs on the model flow from MindSpore to CANN.
Course Customization Options
- For a customized training session tailored to your specific needs, please contact us to arrange.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) is Huawei’s AI computation foundation that enables developers to fine-tune and optimize the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training (online or onsite) is designed for advanced-level AI developers and system engineers who wish to enhance inference performance using CANN’s advanced toolset, including the Graph Engine, TIK, and custom operator development.
By the end of this training, participants will be able to:
- Comprehend CANN's runtime architecture and performance lifecycle.
- Leverage profiling tools and the Graph Engine for performance analysis and optimization.
- Develop and optimize custom operators using TIK and TVM.
- Address memory bottlenecks and improve model throughput.
Format of the Course
- Interactive lectures and discussions.
- Practical labs with real-time profiling and operator tuning.
- Optimization exercises using edge-case deployment scenarios.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) offers robust deployment and optimization tools for real-time AI applications in computer vision and natural language processing, particularly on Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is designed for intermediate-level AI practitioners who want to build, deploy, and optimize vision and language models using the CANN SDK for production scenarios.
By the end of this training, participants will be able to:
- Deploy and optimize computer vision and natural language processing models using CANN and AscendCL.
- Utilize CANN tools to convert models and integrate them into live pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Develop real-time computer vision and natural language processing pipelines for edge or cloud-based deployment scenarios.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab with model deployment and performance profiling.
- Live pipeline design using real-world computer vision and natural language processing use cases.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (available online or on-site) is designed for advanced-level system developers who wish to build, deploy, and fine-tune custom operators for AI models using CANN’s TIK programming model and TVM compiler integration.
By the end of this training, participants will be able to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom operations into the CANN runtime and execution graph.
- Utilize TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for custom computation patterns.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures such as Huawei Ascend, Biren, and Cambricon MLUs provide CUDA alternatives specifically designed for the local AI and HPC markets in China.
This instructor-led, live training (available online or on-site) is targeted at advanced-level GPU programmers and infrastructure specialists who want to migrate and optimize their existing CUDA applications for deployment on Chinese hardware platforms.
By the end of this training, participants will be able to:
- Evaluate the compatibility of their current CUDA workloads with Chinese chip alternatives.
- Translate CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance metrics and identify optimization opportunities across different platforms.
- Tackle practical challenges related to cross-architecture support and deployment.
Format of the Course
- Interactive lectures and discussions.
- Hands-on code translation and performance comparison labs.
- Guided exercises focused on multi-GPU adaptation strategies.
Course Customization Options
- To request a customized training for this course based on your specific platform or CUDA project, please contact us to arrange.
Intelligent Applications Fundamentals
14 Hours
This instructor-led, live training in Slovakia (online or onsite) is aimed at beginner-level IT professionals who wish to gain a foundational understanding of intelligent applications and how they can be applied in various industries.
By the end of this training, participants will be able to:
- Understand the history, principles, and impact of artificial intelligence.
- Identify and apply different machine learning algorithms.
- Manage and analyze data effectively for AI applications.
- Recognize the practical applications and limitations of AI in different sectors.
- Discuss the ethical considerations and societal implications of AI technology.
Intelligent Applications Advanced
21 Hours
This instructor-led, live training in Slovakia (online or onsite) is aimed at intermediate-level to advanced-level data scientists, engineers, and AI practitioners who wish to master the intricacies of intelligent applications and leverage them to solve complex, real-world problems.
By the end of this training, participants will be able to:
- Implement and analyze deep learning architectures.
- Apply machine learning at scale in a distributed computing environment.
- Design and execute reinforcement learning models for decision-making.
- Develop sophisticated NLP systems for language understanding.
- Utilize computer vision techniques for image and video analysis.
- Address ethical considerations in the development and deployment of AI systems.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon are prominent AI hardware platforms in China, each providing specialized acceleration and profiling tools designed for production-scale AI tasks.
This instructor-led, live training (available online or on-site) is tailored for advanced-level AI infrastructure and performance engineers who aim to optimize model inference and training workflows across various Chinese AI chip platforms.
By the end of this training, participants will be able to:
- Evaluate models on Ascend, Biren, and Cambricon platforms.
- Identify system bottlenecks and memory or compute inefficiencies.
- Implement graph-level, kernel-level, and operator-level optimizations.
- Refine deployment pipelines to enhance throughput and reduce latency.
Format of the Course
- Interactive lectures and discussions.
- Practical use of profiling and optimization tools on each platform.
- Guided exercises focused on real-world tuning scenarios.
Course Customization Options
- To request a customized training for this course based on your specific performance environment or model type, please contact us to arrange.