Merge pull request #69 from FederatedAI/feature-2.0.0-update_doc
update readme
mgqa34 authored Mar 6, 2024
2 parents c7a3822 + c53749b commit fea0580
Showing 2 changed files with 4 additions and 4 deletions.
5 changes: 3 additions & 2 deletions README.md
@@ -17,13 +17,14 @@ FATE-LLM is a framework to support federated learning for large language models(

### Standalone deployment
Please refer to [FATE-Standalone deployment](https://github.com/FederatedAI/FATE#standalone-deployment).
- Deploy FATE-Standalone version with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`
+ * To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then make a new directory `{fate_install}/fate_llm` and clone the code into it, install the python requirements, and add `{fate_install}/fate_llm/python` to `PYTHONPATH`
+ * To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy directory `python/fate_llm` to `{fate_install}/fate/python/fate_llm`

### Cluster deployment
Use [FATE-LLM deployment packages](https://github.com/FederatedAI/FATE/wiki/Download#llm%E9%83%A8%E7%BD%B2%E5%8C%85) to deploy, refer to [FATE-Cluster deployment](https://github.com/FederatedAI/FATE#cluster-deployment) for more deployment details.

## Quick Start
- [Federated ChatGLM3-6B Training](./doc/tutorial/parameter_efficient_llm/ChatGLM3-6B_ds.ipynb)
- - [Builtin Models In PELLM](./doc/tutorial/builtin_models.md)
+ - [Builtin Models In PELLM](./doc/tutorial/builtin_pellm_models.md)
- [Offsite Tuning Tutorial](./doc/tutorial/offsite_tuning/Offsite_tuning_tutorial.ipynb)
- [FedKSeed](./doc/tutorial/fedkseed/fedkseed-example.ipynb)
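The v2.0 standalone deployment steps in the updated README (make a directory, clone the code, install the requirements, extend `PYTHONPATH`) can be sketched as a shell function. This is a minimal sketch, not part of the commit: the install root and the `python/requirements.txt` location are assumptions; check the cloned repository layout before running it.

```shell
#!/bin/sh
# Sketch of the FATE-LLM v2.0 standalone setup described in the README diff above.
# Assumptions: FATE-Standalone >= 2.1 is already deployed under the given root,
# and the requirements file lives at python/requirements.txt in the cloned repo.
deploy_fate_llm() {
    fate_install="$1"    # your FATE-Standalone install root, e.g. /data/projects/fate

    mkdir -p "${fate_install}/fate_llm"
    git clone https://github.com/FederatedAI/FATE-LLM.git "${fate_install}/fate_llm"
    pip install -r "${fate_install}/fate_llm/python/requirements.txt"

    # Make the fate_llm package importable from Python
    export PYTHONPATH="${fate_install}/fate_llm/python:${PYTHONPATH}"
}

# Example usage: deploy_fate_llm /data/projects/fate
```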
3 changes: 1 addition & 2 deletions doc/tutorial/builtin_pellm_models.md
@@ -6,13 +6,12 @@ After reading the training tutorial above, it's easy to use other models listing


| Model | ModuleName | ClassName | DataSetName |
- | -------------- | ----------------- | --------------| --------------- | |
+ | -------------- | ----------------- | --------------| --------------- |
| Qwen2 | pellm.qwen | Qwen | prompt_dataset |
| Bloom-7B1 | pellm.bloom | Bloom | prompt_dataset |
| LLaMA-2-7B | pellm.llama | LLaMa | prompt_dataset |
| LLaMA-7B | pellm.llama | LLaMa | prompt_dataset |
| ChatGLM3-6B | pellm.chatglm | ChatGLM | prompt_dataset |
| ChatGLM-6B | pellm.chatglm | ChatGLM | prompt_dataset |
| GPT-2 | pellm.gpt2 | GPT2 | seq_cls_dataset |
| ALBERT | pellm.albert | Albert | seq_cls_dataset |
| BART | pellm.bart | Bart | seq_cls_dataset |
