Course Outline

Performance Concepts and Metrics

  • Latency, throughput, power consumption, and resource utilization
  • Distinguishing system-level versus model-level bottlenecks
  • Profiling strategies for inference compared to training
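The latency and throughput metrics above can be illustrated with a generic, vendor-neutral timing harness (plain Python; real profiling on Ascend, Biren, or MLU would use the vendor tools covered later, and the `benchmark` helper here is purely illustrative):

```python
import time

def benchmark(fn, warmup=5, iters=50):
    """Average latency (ms) and throughput (calls/s) of a callable.

    Warmup runs are discarded so caches and JIT effects do not skew
    the measurement -- the same practice applies on accelerators.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / iters * 1000.0
    throughput = iters / elapsed
    return latency_ms, throughput

# Stand-in workload; on real hardware this would be one inference call.
lat, thr = benchmark(lambda: sum(range(10000)))
print(f"latency {lat:.3f} ms, throughput {thr:.1f} calls/s")
```

Note that single-stream latency and batched throughput usually trade off against each other, which is why the course treats them as separate metrics.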

Profiling on Huawei Ascend

  • Using the CANN Profiler and MindInsight
  • Diagnosing kernels and operators
  • Understanding offload patterns and memory mapping

Profiling on Biren GPU

  • Exploring performance monitoring features within the Biren SDK
  • Kernel fusion, memory alignment, and execution queues
  • Power- and temperature-aware profiling

Profiling on Cambricon MLU

  • Using BANGPy and the Neuware performance tools
  • Gaining kernel-level visibility and interpreting logs
  • Integrating the MLU profiler with deployment frameworks

Graph and Model-Level Optimization

  • Strategies for graph pruning and quantization
  • Operator fusion and restructuring computational graphs
  • Standardizing input sizes and tuning batch sizes
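As a concrete taste of the quantization topic above, here is a minimal symmetric per-tensor int8 sketch in plain NumPy. Vendor toolchains (CANN, the Biren SDK, Neuware) each ship their own calibration and quantization flows; this sketch only shows the underlying idea:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("max reconstruction error:", np.max(np.abs(w - w_hat)))
```

The reconstruction error is the accuracy cost traded for smaller weights and faster int8 kernels; production flows add per-channel scales and calibration data to shrink it further.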

Memory and Kernel Optimization

  • Optimizing memory layout and reuse mechanisms
  • Managing buffers efficiently across different chipsets
  • Applying platform-specific kernel tuning techniques
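Memory layout is one optimization lever that can be shown without vendor tooling. The NumPy sketch below contrasts NCHW and NHWC layouts via their strides; which layout is faster is chip-specific, and this is illustration rather than a recommendation for any of the platforms above:

```python
import numpy as np

# Same tensor data in two layouts. Many accelerators prefer channels-last
# (NHWC) because per-pixel channel values sit adjacent in memory, which
# suits wide vector loads.
x_nchw = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)
x_nhwc = np.ascontiguousarray(x_nchw.transpose(0, 2, 3, 1))  # N, H, W, C

# Strides (bytes between neighboring elements along each axis) make the
# difference visible: in NHWC the channel axis is innermost (stride 4).
print("NCHW strides:", x_nchw.strides)
print("NHWC strides:", x_nhwc.strides)
```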

Cross-Platform Best Practices

  • Achieving performance portability through abstraction strategies
  • Developing shared tuning pipelines for multi-chip environments
  • Example: Tuning an object detection model across Ascend, Biren, and MLU
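One common way to get the performance portability described above is a backend registry: one shared tuning pipeline that dispatches to per-chip implementations. The sketch below is hypothetical throughout; the backend names and `tune_*` functions are illustrations, not real SDK APIs:

```python
# Hypothetical backend registry: a single tuning entry point dispatching
# to chip-specific implementations registered under a string key.
BACKENDS = {}

def register(name):
    def decorator(fn):
        BACKENDS[name] = fn
        return fn
    return decorator

@register("ascend")
def tune_ascend(model):
    return f"{model}:CANN-tuned"        # placeholder for Ascend-specific tuning

@register("biren")
def tune_biren(model):
    return f"{model}:BirenSDK-tuned"    # placeholder for Biren-specific tuning

@register("mlu")
def tune_mlu(model):
    return f"{model}:Neuware-tuned"     # placeholder for MLU-specific tuning

def tune(model, target):
    """Shared pipeline entry point: same call for every chip."""
    return BACKENDS[target](model)

print(tune("detector", "mlu"))
```

The abstraction keeps model-level passes (pruning, quantization, fusion) shared, while only the final kernel- and memory-tuning step varies per target.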

Summary and Next Steps

Requirements

  • Experience in AI model training or deployment pipelines
  • Familiarity with GPU/MLU compute principles and model optimization techniques
  • Basic proficiency with performance profiling tools and metrics

Audience

  • Performance engineers
  • Machine learning infrastructure teams
  • AI system architects
Duration: 21 Hours
