Fine-Tuning and Alignment Bootcamp

About Fine-Tuning and Alignment Bootcamp

In this bootcamp, we explore advanced fine-tuning techniques and approaches that enhance large language model performance and reduce computational cost, with a focus on alignment with human values.

Repository Structure

  • data/: Sample datasets (or links to datasets) used in the bootcamp, along with usage instructions. It also contains the dataset module implementations and related code.
  • reference_implementations/: Reference implementations, organized by topic. Each topic has its own directory containing code, notebooks, and a README for guidance.
  • utils/: Shared utility scripts and common code used across the reference implementations.
  • pyproject.toml: Configures the build system requirements and project dependencies, centralizing project settings in a standardized format.

Reference Implementations

  • Distributed Training Demos: Scripts demonstrating distributed training methods.
  • Instruction Tuning Demo: Jupyter notebook for instruction tuning techniques.
  • PEFT: Custom modules and notebooks demonstrating Parameter-Efficient Fine-Tuning (PEFT) techniques, with interactive sections for participants to complete (see the sketch after this list).
  • Quantization: Notebook demonstrating model quantization techniques.
  • Supervised Fine-Tuning: Notebook demonstrating supervised fine-tuning methods, with interactive sections for participant completion.
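
As a quick orientation before diving into those directories, here is a minimal, illustrative sketch that combines the PEFT and quantization ideas above: loading a base model in 4-bit and attaching LoRA adapters so that only a small fraction of the weights is trained. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries and uses a placeholder model name; it is not code taken from the bootcamp notebooks.

# Minimal QLoRA-style sketch (illustrative only; not from the bootcamp notebooks).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-350m"  # placeholder model chosen for illustration

# Quantization: load the base model in 4-bit to cut memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# PEFT: attach low-rank (LoRA) adapters so only a small set of parameters is trainable.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts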

Getting Started

To get started with this bootcamp:

  1. Clone this repository to your machine:

git clone https://github.com/VectorInstitute/finetuning-and-alignment.git

  2. Work through each topic in the reference_implementations/ directory. All reference implementations except the distributed training demo have a corresponding notebook; follow the instructions from the onboarding sessions to run these notebooks. For the distributed training demo, follow the instructions in its own README file.
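
Before opening the notebooks, it can help to confirm that a GPU-enabled Python environment is available. The check below is only a sketch; it assumes the notebooks rely on PyTorch and Hugging Face Transformers, which is an assumption based on the topics listed above rather than a statement of this repository's exact dependencies.

# Quick environment sanity check (illustrative; exact dependencies may differ).
import torch
import transformers

print(f"transformers version: {transformers.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")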

License

This project is licensed under the terms of the LICENSE.md file.

Contribution

To get started with contributing to this project, please read our CONTRIBUTING.md guide.

Contact Information

  • For more information on the Fine-Tuning and Alignment Bootcamp, please contact the industry team at [email protected].
  • For help navigating this repository, please contact the AI Engineering team at [email protected].
