Releases: FederatedAI/FATE-LLM

Release v2.2.0

02 Aug 06:59
c0ae102

By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.

Major Features and Improvements

  • Integrate the PDSS algorithm, a novel framework that enhances local small language models (SLMs) using differential-privacy-protected Chain-of-Thought (CoT) rationales generated by remote LLMs:
    • Implement InferDPT for privacy-preserving CoT generation (a conceptual sketch follows this list).
    • Support an encoder-decoder mechanism for privacy-preserving CoT generation.
    • Add prefix trainers for step-by-step distillation and text encoder-decoder training.
  • Integrate the FDKT algorithm, a framework that enables domain-specific knowledge transfer from LLMs to SLMs while preserving SLM data privacy.
  • Deployment Optimization: support installation of FATE-LLM via PyPI.
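
The core idea behind InferDPT-style privacy-preserving CoT generation can be illustrated with a small, framework-agnostic sketch: the private prompt is perturbed locally with a differentially private exponential mechanism before it is sent to the remote LLM, which then produces the CoT over the perturbed text. The toy vocabulary, embeddings, and `perturb_token` helper below are hypothetical illustrations and are not part of the FATE-LLM API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with 2-d embeddings; a real system would use the model's
# own token embeddings (the values here are purely illustrative).
vocab = ["cat", "dog", "car", "tree", "house"]
emb = {w: rng.normal(size=2) for w in vocab}

def perturb_token(token: str, epsilon: float) -> str:
    """Sample a replacement token via the exponential mechanism: tokens with
    closer embeddings get exponentially higher probability, and a smaller
    epsilon yields a stronger (more private) perturbation."""
    utility = np.array([-np.linalg.norm(emb[token] - emb[w]) for w in vocab])
    scores = np.exp(0.5 * epsilon * utility)
    return str(rng.choice(vocab, p=scores / scores.sum()))

# Perturb the prompt locally before sending it to the remote LLM; a local
# model later reconciles the returned CoT with the original private prompt.
prompt = ["cat", "tree", "house"]
print([perturb_token(t, epsilon=2.0) for t in prompt])
```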

Release v2.1.0

28 Jun 07:09
173c27a

By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.

Major Features and Improvements

  • Introducing FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models (a distillation sketch follows this list).
    • Support three distinct scenarios: Heterogeneous, Homogeneous, and One-to-One.
    • Support one-way knowledge transfer from LLM to SLM.
  • Introducing InferDPT: Privacy-preserving Inference for Black-box Large Language Models. InferDPT leverages differential privacy (DP) to facilitate privacy-preserving inference for large language models.
  • Introducing FATE-LLM Evaluate: evaluate FATE-LLM models in a few lines with the Python SDK or simple CLI commands; built-in cases included.
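
At its core, the mutual knowledge transfer in FedMKT builds on token-level knowledge distillation over shared public data. Below is a minimal, framework-agnostic sketch of that distillation step, assuming a shared vocabulary between the LLM and SLM (the real algorithm additionally aligns tokens across heterogeneous tokenizers); it is not the FATE-LLM implementation, and the tensors are random stand-ins.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level distillation loss: KL divergence between the
    temperature-softened teacher and student output distributions."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Mutual transfer: each side is alternately teacher and student on public
# data both parties hold (random logits stand in for real model outputs).
llm_logits = torch.randn(4, 16, 32000)   # (batch, seq_len, vocab_size)
slm_logits = torch.randn(4, 16, 32000)

loss_for_slm = distill_loss(slm_logits, llm_logits.detach())  # LLM -> SLM
loss_for_llm = distill_loss(llm_logits, slm_logits.detach())  # SLM -> LLM
print(loss_for_slm.item(), loss_for_llm.item())
```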

Release v2.0.0

06 Mar 07:53
abee189

By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.

Major Features and Improvements

  • Adapt to the FATE v2.0 framework:
    • Migrate parameter-efficient fine-tuning training methods and models.
    • Migrate Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+).
    • New designs for the trainer, dataset, and data_processing functions.
  • New FedKSeed federated tuning algorithm: train large language models in a federated learning setting with extremely low communication cost (see the sketch below).
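
The "extremely low communication cost" comes from how FedKSeed describes an update: perturbation directions are regenerated from a small shared pool of random seeds, so a client only uploads one scalar per seed instead of a full model delta. The sketch below is a toy, framework-agnostic illustration under assumed names and hyperparameters (DIM, K, mu, lr, and the quadratic stand-in loss); it is not the FATE-LLM trainer.

```python
import numpy as np

DIM, K = 1_000_000, 4       # model parameter count and number of candidate seeds
seeds = list(range(K))      # candidate seeds agreed on by server and clients

def perturbation(seed: int) -> np.ndarray:
    """Deterministically regenerate a perturbation direction from its seed,
    so the full vector never has to be transmitted."""
    return np.random.default_rng(seed).standard_normal(DIM)

def client_round(params: np.ndarray, loss_fn, mu: float = 1e-3):
    """Zeroth-order estimate of the directional derivative along each seeded
    perturbation; the uplink message is just K (seed, scalar) pairs."""
    message = []
    for s in seeds:
        z = perturbation(s)
        g = (loss_fn(params + mu * z) - loss_fn(params - mu * z)) / (2 * mu)
        message.append((s, g))
    return message

def server_apply(params: np.ndarray, message, lr: float = 1e-4) -> np.ndarray:
    """Rebuild each perturbation from its seed and take the scalar-weighted steps."""
    for s, g in message:
        params = params - lr * g * perturbation(s)
    return params

loss = lambda p: float(np.mean(p ** 2))   # toy stand-in for a language-model loss
params = np.ones(DIM)
msg = client_round(params, loss)
params = server_apply(params, msg)
print(f"uplink payload per round: {len(msg)} (seed, scalar) pairs vs {DIM:,} floats")
```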

Release v1.3.0

08 Sep 06:22
c5d1bb3

By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.

Major Features and Improvements

  • FTL-LLM (Federated Learning + Transfer Learning + LLM)
    • Standard Offsite-Tuning and Extended Offsite-Tuning (Federated Offsite-Tuning+) now supported
    • Framework available for Emulator and Adapter development
    • New Offsite-Tuning Trainer introduced
    • Includes built-in models such as the GPT-2 family, LLaMA-7B, and the Bloom family
  • FedIPR
    • Introduced WatermarkDataset as the foundational dataset class for backdoor-based watermarks
    • Added SignConv and SignLayerNorm blocks for feature-based watermark models (a sign-loss sketch follows this list)
    • New FedIPR Trainer available
    • Built-in models with feature-based watermarks include AlexNet, ResNet-18, DistilBERT, and GPT-2
  • More models support parameter-efficient fine-tuning: ChatGLM2-6B and Bloom-7B1
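
The feature-based watermarks behind the SignConv and SignLayerNorm blocks rest on a sign-loss idea: a secret key projects a layer's scale parameters onto a set of scalars whose signs are trained to match the owner's signature bits. The sketch below is a minimal illustration of that idea with assumed shapes and hyperparameters; it is not the FedIPR Trainer or the actual block implementations.

```python
import torch

def sign_loss(gamma, key, bits, margin=0.1):
    """Hinge-style sign loss: pushes each projection of the layer's scale
    parameters onto the secret key toward its target +1/-1 signature bit."""
    projections = key @ gamma              # one scalar per signature bit
    return torch.relu(margin - bits * projections).mean()

def extract_bits(gamma, key):
    """Read the watermark back out as the signs of the projections."""
    return torch.sign(key @ gamma)

torch.manual_seed(0)
dim, n_bits = 64, 16
gamma = torch.randn(dim, requires_grad=True)   # e.g. a LayerNorm scale vector
key = torch.randn(n_bits, dim)                 # owner's secret key matrix
bits = torch.sign(torch.randn(n_bits))         # owner's +1/-1 signature

opt = torch.optim.SGD([gamma], lr=0.1)
for _ in range(200):             # in practice this term is added to the task loss
    opt.zero_grad()
    sign_loss(gamma, key, bits).backward()
    opt.step()

print("signature recovered:", bool(torch.equal(extract_bits(gamma, key), bits)))
```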

Release v1.2.0

25 Jun 09:22
5197a6a

By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.

Major Features and Improvements

  • Support Federated Training of LLaMA-7B with parameter-efficient fine-tuning.

Release v1.1.0

31 May 15:08
342ac6b

By downloading, installing or using the software, you accept and agree to be bound by all of the terms and conditions of the LICENSE and DISCLAIMER.

Major Features and Improvements

  • Support Federated Training of ChatGLM-6B with parameter-efficient fine-tuning adapters such as LoRA and P-Tuning v2 (a peft usage sketch follows this list).
  • Integration of peft, which supports many parameter-efficient adapters.
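
For reference, wrapping a base model with parameter-efficient adapters through peft looks roughly like the sketch below. gpt2 is used only as a small stand-in for ChatGLM-6B and the LoRA hyperparameters are illustrative; the federated wiring provided by the FATE-LLM trainer is not shown.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Load a base model and attach LoRA adapters so that only the small
# low-rank matrices are trained (and exchanged) during fine-tuning.
model = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in for ChatGLM-6B
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=32,
    lora_dropout=0.1,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only a small fraction of weights is trainable
```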