
Getting Started with LLM Fine-Tuning (LLM-quickstart-main)

Artificial Intelligence · 5.95 MB · Points required: 1

Resource description:

Getting Started with LLM Fine-Tuning
## Translation

This directory contains examples for finetuning and evaluating transformers on translation tasks.
Please tag @patil-suraj with any issues/unexpected behaviors, or send a PR!

For deprecated `bertabs` instructions, see [`bertabs/README.md`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/bertabs/README.md).

For the old `finetune_trainer.py` and related utils, see [`examples/legacy/seq2seq`](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq).

### Supported Architectures

- `BartForConditionalGeneration`
- `FSMTForConditionalGeneration` (translation only)
- `MBartForConditionalGeneration`
- `MarianMTModel`
- `PegasusForConditionalGeneration`
- `T5ForConditionalGeneration`
- `MT5ForConditionalGeneration`

`run_translation.py` is a lightweight example of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it.

For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets#json-files. You will also find examples of these below.

## With Trainer

Here is an example of a translation fine-tuning with a MarianMT model:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

MBart and some T5 models require special handling.

T5 models `t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b` must use an additional argument: `--source_prefix "translate {source_lang} to {target_lang}"`. For example:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang ro \
    --source_prefix "translate English to Romanian: " \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

If you get a terrible BLEU score, make sure that you didn't forget to use the `--source_prefix` argument.

For the aforementioned group of T5 models it's important to remember that if you switch to a different language pair, you must adjust the source and target values in all 3 language-specific command line arguments: `--source_lang`, `--target_lang` and `--source_prefix`.

MBart models require a different format for the `--source_lang` and `--target_lang` values, e.g. instead of `en` it expects `en_XX`, and for `ro` it expects `ro_RO`. The full MBart specification for language codes can be found [here](https://huggingface.co/facebook/mbart-large-cc25).
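Before the MBart command below, here is a rough sketch of the preprocessing that these language and prefix arguments control. This is a simplified illustration, not the exact code in `run_translation.py`, and it assumes a reasonably recent `transformers` release where the tokenizer accepts `text_target`:

```python
# Simplified sketch of what --source_prefix / --source_lang / --target_lang
# amount to during preprocessing: the prefix is prepended to every source
# sentence, then source and target are tokenized together.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

source_lang, target_lang = "en", "ro"
prefix = "translate English to Romanian: "  # what --source_prefix supplies for T5

examples = {
    "translation": [
        {"en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă."},
    ]
}

inputs = [prefix + ex[source_lang] for ex in examples["translation"]]
targets = [ex[target_lang] for ex in examples["translation"]]

# text_target tokenizes the targets into the "labels" field.
model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)

print(tokenizer.decode(model_inputs["input_ids"][0]))
print(tokenizer.decode(model_inputs["labels"][0]))
```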
For example, for MBart:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path facebook/mbart-large-en-ro \
    --do_train \
    --do_eval \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --source_lang en_XX \
    --target_lang ro_RO \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

And here is how you would use the translation finetuning on your own files, after adjusting the values for the arguments `--train_file` and `--validation_file` to match your setup:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang ro \
    --source_prefix "translate English to Romanian: " \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --train_file path_to_jsonlines_file \
    --validation_file path_to_jsonlines_file \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

The translation task supports only custom JSONLINES files, with each line being a dictionary with a key `"translation"` whose value is another dictionary whose keys are the two languages of the pair. For example:

```json
{ "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } }
{ "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } }
```

Here the languages are Romanian (`ro`) and English (`en`).

If you want to use a pre-processed dataset that leads to high BLEU scores, but for the `en-de` language pair, you can use `--dataset_name stas/wmt14-en-de-pre-processed`, as follows:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang de \
    --source_prefix "translate English to German: " \
    --dataset_name stas/wmt14-en-de-pre-processed \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

## With Accelerate

Based on the script [`run_translation_no_trainer.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py).

Like `run_translation.py`, this script allows you to fine-tune any of the supported models on a translation task; the main difference is that it exposes the bare training loop, so that you can quickly experiment and add any customization you would like. It offers fewer options than the script with `Trainer` (though you can easily change the options for the optimizer or the dataloaders directly in the script), but it still runs in a distributed setup, on TPUs, and supports mixed precision by means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally after installing it:

```bash
pip install git+https://github.com/huggingface/accelerate
```

then

```bash
python run_translation_no_trainer.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir ~/tmp/tst-translation
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked.
Then run

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch run_translation_no_trainer.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir ~/tmp/tst-translation
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- a distributed training with several GPUs (single or multi node)
- a training on TPUs

Note that this library is in alpha release so your feedback is more than welcome if you encounter any problem using it.
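To make "exposes the bare training loop" concrete, below is a minimal, hedged sketch of the Accelerate pattern the no-trainer script follows. It is an illustration only, not the actual contents of `run_translation_no_trainer.py`; it assumes `t5-small` and a two-sentence toy dataset so it stays self-contained:

```python
# Minimal sketch of a bare training loop with 🤗 Accelerate (illustrative only).
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

accelerator = Accelerator()  # picks up the setup chosen via `accelerate config`

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Toy data in the same {"translation": {...}} shape as the JSON Lines format above.
pairs = [
    {"en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă."},
    {"en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia."},
]
prefix = "translate English to Romanian: "
enc = tokenizer(
    [prefix + p["en"] for p in pairs],
    text_target=[p["ro"] for p in pairs],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
# Note: for simplicity, padded label tokens are not masked with -100 here,
# as the real script does.
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], enc["labels"])
dataloader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# prepare() wraps model, optimizer and dataloader for CPU, single GPU,
# multi-GPU or TPU without changing the loop itself.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for epoch in range(2):
    for input_ids, attention_mask, labels in dataloader:
        outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        accelerator.backward(outputs.loss)  # replaces loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    accelerator.print(f"epoch {epoch}: loss {outputs.loss.item():.4f}")
```

Launching such a script with `accelerate launch` instead of plain `python` is what lets the same loop run unchanged on any of the setups listed above.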

Resource file list:

LLM-quickstart-main.zip contains approximately 105 files
  1. LLM-quickstart-main/
  2. LLM-quickstart-main/.gitignore 3.08KB
  3. LLM-quickstart-main/LICENSE 11.09KB
  4. LLM-quickstart-main/README-en.md 6.32KB
  5. LLM-quickstart-main/README.md 6.22KB
  6. LLM-quickstart-main/chatglm/
  7. LLM-quickstart-main/chatglm/chatbot_webui.py 1.07KB
  8. LLM-quickstart-main/chatglm/chatbot_with_memory.ipynb 11.29KB
  9. LLM-quickstart-main/chatglm/chatglm_inference.ipynb 34.65KB
  10. LLM-quickstart-main/chatglm/data/
  11. LLM-quickstart-main/chatglm/data/raw_data.txt 18.9KB
  12. LLM-quickstart-main/chatglm/data/zhouyi_dataset_20240118_152413.csv 213.75KB
  13. LLM-quickstart-main/chatglm/data/zhouyi_dataset_20240118_163659.csv 147.08KB
  14. LLM-quickstart-main/chatglm/data/zhouyi_dataset_handmade.csv 7.53KB
  15. LLM-quickstart-main/chatglm/gen_dataset.ipynb 73.01KB
  16. LLM-quickstart-main/chatglm/qlora_chatglm3.ipynb 39.45KB
  17. LLM-quickstart-main/chatglm/qlora_chatglm3_timestamp.ipynb 36.9KB
  18. LLM-quickstart-main/deepspeed/
  19. LLM-quickstart-main/deepspeed/README.md 1.74KB
  20. LLM-quickstart-main/deepspeed/config/
  21. LLM-quickstart-main/deepspeed/config/ds_config_zero2.json 1.2KB
  22. LLM-quickstart-main/deepspeed/config/ds_config_zero3.json 1.46KB
  23. LLM-quickstart-main/deepspeed/train_on_multi_nodes.sh 2.12KB
  24. LLM-quickstart-main/deepspeed/train_on_one_gpu.sh 1.81KB
  25. LLM-quickstart-main/deepspeed/translation/
  26. LLM-quickstart-main/deepspeed/translation/README.md 7.88KB
  27. LLM-quickstart-main/deepspeed/translation/requirements.txt 119B
  28. LLM-quickstart-main/deepspeed/translation/run_translation.py 29.58KB
  29. LLM-quickstart-main/docs/
  30. LLM-quickstart-main/docs/INSTALL.md 4.46KB
  31. LLM-quickstart-main/docs/cuda_installation.png 136.28KB
  32. LLM-quickstart-main/docs/version_check.py 966B
  33. LLM-quickstart-main/docs/version_info.txt 382B
  34. LLM-quickstart-main/langchain/
  35. LLM-quickstart-main/langchain/chains/
  36. LLM-quickstart-main/langchain/chains/router_chain.ipynb 16.45KB
  37. LLM-quickstart-main/langchain/chains/sequential_chain.ipynb 22.3KB
  38. LLM-quickstart-main/langchain/chains/transform_chain.ipynb 317.34KB
  39. LLM-quickstart-main/langchain/data_connection/
  40. LLM-quickstart-main/langchain/data_connection/document_loader.ipynb 63.45KB
  41. LLM-quickstart-main/langchain/data_connection/document_transformer.ipynb 60.63KB
  42. LLM-quickstart-main/langchain/data_connection/text_embedding.ipynb 7.18KB
  43. LLM-quickstart-main/langchain/data_connection/vector_stores.ipynb 76.04KB
  44. LLM-quickstart-main/langchain/images/
  45. LLM-quickstart-main/langchain/images/llm_chain.png 1.94MB
  46. LLM-quickstart-main/langchain/images/memory.png 110.59KB
  47. LLM-quickstart-main/langchain/images/model_io.jpeg 643.33KB
  48. LLM-quickstart-main/langchain/images/router_chain.png 524.24KB
  49. LLM-quickstart-main/langchain/images/sequential_chain_0.png 502.17KB
  50. LLM-quickstart-main/langchain/images/simple_sequential_chain_0.png 479.32KB
  51. LLM-quickstart-main/langchain/images/simple_sequential_chain_1.png 616.04KB
  52. LLM-quickstart-main/langchain/images/transform_chain.png 498.14KB
  53. LLM-quickstart-main/langchain/memory/
  54. LLM-quickstart-main/langchain/memory/memory.ipynb 24.31KB
  55. LLM-quickstart-main/langchain/model_io/
  56. LLM-quickstart-main/langchain/model_io/model.ipynb 35.87KB
  57. LLM-quickstart-main/langchain/model_io/output_parser.ipynb 15.11KB
  58. LLM-quickstart-main/langchain/model_io/prompt.ipynb 60.29KB
  59. LLM-quickstart-main/langchain/tests/
  60. LLM-quickstart-main/langchain/tests/state_of_the_union.txt 38.11KB
  61. LLM-quickstart-main/langchain/tests/the_old_man_and_the_sea.txt 137.4KB
  62. LLM-quickstart-main/llama/
  63. LLM-quickstart-main/llama/llama2_inference.ipynb 5.37KB
  64. LLM-quickstart-main/llama/llama2_instruction_tuning.ipynb 22.68KB
  65. LLM-quickstart-main/peft/
  66. LLM-quickstart-main/peft/chatglm3.ipynb 23.52KB
  67. LLM-quickstart-main/peft/data/
  68. LLM-quickstart-main/peft/data/audio/
  69. LLM-quickstart-main/peft/data/audio/test_zh.flac 788.95KB
  70. LLM-quickstart-main/peft/peft_chatglm_inference.ipynb 6.86KB
  71. LLM-quickstart-main/peft/peft_lora_opt-6.7b.ipynb 268.51KB
  72. LLM-quickstart-main/peft/peft_lora_whisper-large-v2.ipynb 40.85KB
  73. LLM-quickstart-main/peft/peft_qlora_chatglm.ipynb 41.27KB
  74. LLM-quickstart-main/peft/whisper_eval.ipynb 23.18KB
  75. LLM-quickstart-main/quantization/
  76. LLM-quickstart-main/quantization/AWQ-opt-125m.ipynb 13.69KB
  77. LLM-quickstart-main/quantization/AWQ_opt-2.7b.ipynb 11.63KB
  78. LLM-quickstart-main/quantization/AutoGPTQ_opt-2.7b.ipynb 600.78KB
  79. LLM-quickstart-main/quantization/bits_and_bytes.ipynb 11.7KB
  80. LLM-quickstart-main/quantization/docs/
  81. LLM-quickstart-main/quantization/docs/images/
  82. LLM-quickstart-main/quantization/docs/images/qlora.png 140.99KB
  83. LLM-quickstart-main/requirements.txt 430B
  84. LLM-quickstart-main/transformers/
  85. LLM-quickstart-main/transformers/data/
  86. LLM-quickstart-main/transformers/data/audio/
  87. LLM-quickstart-main/transformers/data/audio/mlk.flac 374.46KB
  88. LLM-quickstart-main/transformers/data/image/
  89. LLM-quickstart-main/transformers/data/image/cat-chonk.jpeg 54.99KB
  90. LLM-quickstart-main/transformers/data/image/cat_dog.jpg 68.63KB
  91. LLM-quickstart-main/transformers/data/image/panda.jpg 600.53KB
  92. LLM-quickstart-main/transformers/docs/
  93. LLM-quickstart-main/transformers/docs/images/
  94. LLM-quickstart-main/transformers/docs/images/bert-base-chinese.png 47.29KB
  95. LLM-quickstart-main/transformers/docs/images/bert.png 222.39KB
  96. LLM-quickstart-main/transformers/docs/images/bert_pretrain.png 111.51KB
  97. LLM-quickstart-main/transformers/docs/images/full_nlp_pipeline.png 96.02KB
  98. LLM-quickstart-main/transformers/docs/images/gpt2.png 42.6KB
  99. LLM-quickstart-main/transformers/docs/images/pipeline_advanced.png 96.02KB
  100. LLM-quickstart-main/transformers/docs/images/pipeline_func.png 51.51KB
  101. LLM-quickstart-main/transformers/docs/images/question_answering.png 52.74KB
  102. LLM-quickstart-main/transformers/fine-tune-QA.ipynb 87.73KB
  103. LLM-quickstart-main/transformers/fine-tune-quickstart.ipynb 40.88KB
  104. LLM-quickstart-main/transformers/pipelines.ipynb 50.68KB
  105. LLM-quickstart-main/transformers/pipelines_advanced.ipynb 27.67KB
Other resources:

deepseek-engineer-main

New-energy lithium battery: Omron ladder-diagram program template (Omron program for a film-wrapping / blue-film-wrapping machine), a standardized "kr" program template from a major new-energy contractor. Includes the PLC program, the HMI (touch screen) project, electrical wiring diagrams, and a wear-parts BOM; program passwords and protected libraries are unlocked, so the source code is viewable. Covers the automatic sequence, the initialization sequence, manual/automatic safety conditions, and recipe save/load. The program and HMI come from a real production line and are used together.

COMSOL porous-media fluid-solid coupling case: spatiotemporal evolution of pore pressure and displacement.

Fluent laser cladding case (additive manufacturing, fluid simulation) with a mass source.

Automatic speech translation: seamless

Modeling and simulation of a doubly-fed wind power generation system (model and lab report included)

Model predictive control (MPC) trajectory tracking for a two-wheel differential-drive mobile robot (Simulink model plus MATLAB code, no co-simulation, longitudinal and lateral tracking), latest version:
1. Wheeled mobile robot (WMR): the MPC trajectory tracking follows both the vehicle speed and the reference path.
2. The plant model is built in Simulink and the MPC controller is written in MATLAB code; there is no co-simulation.
3. Five trajectories are included: circular trajectories at three speeds, a straight line at one speed, and a double lane change at one speed; simulation results are shown in the figures.
4. Plotting code for comparison figures is included, so trajectory comparison plots can be generated with one click.
5. To keep the control output smooth, the MPC controller is formulated in terms of control increments.
6. The code is well organized, with comments on the key parts.
7. A reference paper is included.

Rotor position estimation for a PMSM using a nonlinear flux-linkage observer:
1. The nonlinear flux-linkage observer supports low-speed and even zero-speed startup and operation, with strong stability and fast convergence.
2. Reference literature is included.
3. The nonlinear flux-linkage observer code is included.