Course Outline

Introduction to Parameter-Efficient Fine-Tuning (PEFT)

  • The costs and limitations of full fine-tuning, and the motivation for parameter-efficient alternatives
  • Overview of PEFT: objectives and advantages
  • Industry applications and use cases

LoRA (Low-Rank Adaptation)

  • Core concepts and intuitive understanding of LoRA
  • Implementing LoRA with Hugging Face and PyTorch (see the sketch after this list)
  • Practical session: Fine-tuning a model using LoRA
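
To give a feel for the practical session, here is a minimal sketch of LoRA fine-tuning with the Hugging Face peft library; the base model ("gpt2") and all hyperparameters are illustrative assumptions, not fixed course material:

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

    config = LoraConfig(
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling applied to the update
        target_modules=["c_attn"],  # GPT-2 attention projection; model-specific
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% trainable

Only the injected low-rank matrices receive gradients; the wrapped model otherwise trains with an ordinary PyTorch or Trainer loop.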

Adapter Tuning

  • Mechanics of adapter modules (a minimal example follows this list)
  • Integration with transformer-based architectures
  • Practical session: Applying Adapter Tuning to a transformer model
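
As a preview of the mechanics, a minimal bottleneck adapter written in plain PyTorch rather than any particular library's implementation; the dimensions are illustrative assumptions:

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        # Down-project, nonlinearity, up-project, with a residual connection.
        def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_dim, bottleneck_dim)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck_dim, hidden_dim)
            nn.init.zeros_(self.up.weight)  # start near-identity for stable training
            nn.init.zeros_(self.up.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.up(self.act(self.down(x)))

    adapter = Adapter(hidden_dim=768)        # e.g. BERT-base hidden size
    out = adapter(torch.randn(2, 16, 768))   # (batch, seq_len, hidden)

Modules of this shape are typically inserted after the attention and feed-forward sub-layers of each transformer block, and only their parameters are trained.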

Prefix Tuning

  • Leveraging soft prompts for fine-tuning (sketched below)
  • Advantages and limitations relative to LoRA and adapters
  • Practical session: Executing Prefix Tuning on an LLM task
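
A minimal sketch of Prefix Tuning with the peft library, where a trainable soft prefix of "virtual tokens" conditions the frozen model; the base model and prefix length are illustrative assumptions:

    from transformers import AutoModelForCausalLM
    from peft import PrefixTuningConfig, TaskType, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")

    config = PrefixTuningConfig(
        task_type=TaskType.CAUSAL_LM,
        num_virtual_tokens=20,  # length of the trainable soft prefix
    )

    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # only the prefix parameters train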

Evaluating and Comparing PEFT Methods

  • Key metrics for assessing performance and efficiency (see the helper below)
  • Trade-offs among training speed, memory consumption, and accuracy
  • Benchmarking experiments and interpreting results
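
One of the simplest efficiency metrics used when comparing methods is the fraction of parameters that actually train; a small helper along these lines works for any PyTorch or peft model:

    def trainable_fraction(model) -> float:
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        return trainable / total

    # e.g. print(f"{trainable_fraction(model):.4%}") for each PEFT variant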

Deploying Fine-Tuned Models

  • Techniques for saving and loading fine-tuned models (illustrated below)
  • Considerations for deploying PEFT-based models
  • Integration into applications and data pipelines
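
A minimal sketch of the save/load round trip with peft; the paths and base model name are illustrative assumptions, and model is assumed to be an already-trained PeftModel:

    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Saving writes only the small adapter weights, not the full base model.
    model.save_pretrained("lora-adapter/")

    # Loading re-attaches the adapter to a freshly loaded base model.
    base = AutoModelForCausalLM.from_pretrained("gpt2")
    restored = PeftModel.from_pretrained(base, "lora-adapter/")

    # Optionally fold the adapter into the base weights for standalone serving.
    merged = restored.merge_and_unload()
    merged.save_pretrained("merged-model/")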

Best Practices and Extensions

  • Combining PEFT with quantization and distillation techniques (see the QLoRA-style sketch below)
  • Application in low-resource and multilingual environments
  • Emerging trends and active research areas
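
As a taste of combining PEFT with quantization, a QLoRA-style sketch that loads a 4-bit base model and attaches LoRA adapters on top; it assumes a CUDA machine with bitsandbytes installed, and the model and settings are illustrative:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    base = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb)
    base = prepare_model_for_kbit_training(base)  # enable grads where needed

    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()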

Summary and Next Steps

Requirements

  • Foundational knowledge of machine learning concepts
  • Practical experience with large language models (LLMs)
  • Proficiency in Python and PyTorch

Target Audience

  • Data Scientists
  • AI Engineers

Duration

14 Hours
