LoRA Fine-Tuning of Qwen2.5-7B-Instruct on Huawei Ascend 910B1
System environment
The run uses the following Ascend driver, firmware, CANN toolkit, and kernels packages:
Ascend-hdk-910b-npu-driver_24.1.rc3_linux-aarch64.run
Ascend-hdk-910b-npu-firmware_7.5.0.1.129.run
Ascend-cann-toolkit_8.0.RC3.alpha003_linux-aarch64.run
Ascend-cann-kernels-910b_8.0.RC3.alpha003_linux-aarch64.run
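The four .run packages above are, in order, the NPU driver, the NPU firmware, the CANN toolkit, and the 910B operator kernels. A minimal installation sketch follows; the --full/--install flags and the default set_env.sh path are assumptions based on common Ascend deployments, so check each package's --help before running.
chmod +x Ascend-hdk-910b-npu-*.run Ascend-cann-*.run
# Driver first, then firmware (assumed flags; a reboot is typically needed afterwards)
./Ascend-hdk-910b-npu-driver_24.1.rc3_linux-aarch64.run --full
./Ascend-hdk-910b-npu-firmware_7.5.0.1.129.run --full
# CANN toolkit, then the 910B kernel package
./Ascend-cann-toolkit_8.0.RC3.alpha003_linux-aarch64.run --install
./Ascend-cann-kernels-910b_8.0.RC3.alpha003_linux-aarch64.run --install
# Expose the CANN environment to the current shell (default install path, an assumption)
source /usr/local/Ascend/ascend-toolkit/set_env.sh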
Virtual environment
This assumes conda and git are already installed.
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory/
conda create -y -n llamafactory python=3.10
conda activate llamafactory
pip install -e ".[torch-npu,metrics]" -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -e ".[deepspeed,modelscope]" -i https://pypi.tuna.tsinghua.edu.cn/simple
llamafactory-cli env
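llamafactory-cli env prints the detected Python, PyTorch, torch-npu, and CANN versions. To confirm the NPUs themselves are visible, a quick probe via the torch_npu plugin (installed by the torch-npu extra above) is:
python -c 'import torch, torch_npu; print(torch.npu.is_available(), torch.npu.device_count())'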
Fine-tuning the model
USE_MODELSCOPE_HUB=1 tells LLaMA-Factory to download the model from ModelScope instead of Hugging Face, and ASCEND_RT_VISIBLE_DEVICES exposes all eight 910B cards to the distributed run.
export USE_MODELSCOPE_HUB=1
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 llamafactory-cli train examples/train_lora/qwen2__5_lora_sft.yaml
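If only some cards are free, shrink the device list; the yaml itself needs no change, though the effective global batch size scales with the number of devices. A hypothetical four-card launch:
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3 llamafactory-cli train examples/train_lora/qwen2__5_lora_sft.yaml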
The YAML file (examples/train_lora/qwen2__5_lora_sft.yaml):
### model
model_name_or_path: qwen/Qwen2.5-7B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj   # attach LoRA adapters to the attention query and value projections
### ddp
ddp_timeout: 180000000   # very large timeout (seconds) so slow collectives do not abort the run
deepspeed: examples/deepspeed/ds_z0_config.json   # ZeRO stage 0, i.e. plain data parallelism under DeepSpeed
### dataset
dataset: alpaca_zh_demo
template: qwen
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/Qwen2.5-7B-Instruct/lora/sft
logging_steps: 10
save_steps: 1000
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 120
lr_scheduler_type: cosine
bf16: true
### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps   # renamed to eval_strategy in newer transformers releases
eval_steps: 500
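Once training finishes, the LoRA adapter under saves/Qwen2.5-7B-Instruct/lora/sft can be merged into a standalone checkpoint with llamafactory-cli export. Below is a minimal sketch: the field names follow LLaMA-Factory's merge_lora examples, while the file name merge_lora.yaml and the export_dir value are placeholders of my own choosing.
### merge_lora.yaml (hypothetical name)
model_name_or_path: qwen/Qwen2.5-7B-Instruct
adapter_name_or_path: saves/Qwen2.5-7B-Instruct/lora/sft
template: qwen
finetuning_type: lora
export_dir: models/qwen2.5-7b-sft   # placeholder output path
export_size: 2
export_legacy_format: false

llamafactory-cli export merge_lora.yaml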
[Figures: training_loss and training_eval_loss curves, written to output_dir after training when plot_loss is enabled]
Original article: https://blog.csdn.net/weixin_46398647/article/details/145173058