
ValueError: You cannot perform fine-tuning on purely quantized models.

When fine-tuning an 8-bit or 4-bit quantized model with peft, you may run into this error:

You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details
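For context, this check fires as soon as the Trainer is constructed around a quantized base model that has no trainable adapters attached. A minimal sketch of the situation (the model name and output directory are placeholders, not taken from the original setup):

from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    Trainer,
    TrainingArguments,
)

# Load a base model in 8-bit; any quantized load triggers the same check.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# No PEFT adapters attached: Trainer.__init__ raises
# "You cannot perform fine-tuning on purely quantized models. ..."
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
)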

Looking at the code in transformers' trainer.py:

# At this stage the model is already loaded
if _is_quantized_and_base_model and not _is_peft_model(model):
    raise ValueError(
        "You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of"
        " the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft"
        " for more details"
    )

_is_quantized_and_base_model checks whether the model is quantized (and has no PEFT config loaded):

_is_quantized_and_base_model = getattr(model, "is_quantized", False) and not getattr(
    model, "_hf_peft_config_loaded", False
)
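Both attributes can be inspected directly on the loaded model. A quick sanity check, reusing the quantized model from the sketch above (the attribute names come from the Trainer code; exact behavior may vary across transformers versions):

# Freshly loaded with a quantization_config and no adapters attached:
print(getattr(model, "is_quantized", False))            # expected: True
print(getattr(model, "_hf_peft_config_loaded", False))  # expected: False

# So _is_quantized_and_base_model evaluates to True, and the ValueError fires
# unless the model is also recognized as a PEFT model by _is_peft_model.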

_is_peft_model checks whether the model is a PeftModel or a PeftMixedModel:

def _is_peft_model(model):
    if is_peft_available():
        classes_to_check = (PeftModel,) if is_peft_available() else ()
        # Here we also check if the model is an instance of `PeftMixedModel` introduced in peft>=0.7.0: https://github.com/huggingface/transformers/pull/28321
        if version.parse(importlib.metadata.version("peft")) >= version.parse("0.7.0"):
            from peft import PeftMixedModel

            classes_to_check = (*classes_to_check, PeftMixedModel)
        return isinstance(model, classes_to_check)
    return False
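With the installed peft library, wrapping the base model in an adapter is exactly what makes this check pass. A short sketch, reusing the quantized model from above (_is_peft_model is a private helper in transformers.trainer, imported here only for illustration; the LoRA settings are illustrative):

from peft import LoraConfig, TaskType, get_peft_model
from transformers.trainer import _is_peft_model

print(_is_peft_model(model))       # False: still the bare quantized model

config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16)
peft_model = get_peft_model(model, config)
print(_is_peft_model(peft_model))  # True: PeftModelForCausalLM subclasses peft.PeftModel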

DEBUG:

1. Quantization parameters were set when the model was loaded, so it is definitely a quantized model;
2. get_peft_model(model, config) was called, so why is the model not a PeftModel? What type is it, then?

<class 'src.peft.peft_model.PeftModelForCausalLM'>

This happened because, while testing the peft code, the local copy of the peft source was imported instead of the installed peft library, so the wrapped model's class came from the local package rather than from peft and the type check failed.
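In other words, the wrapper produced by the local source tree is not the same Python class object as the one in the installed library, even if the code is identical, so the isinstance check above fails. A sketch of the failure mode, assuming the local checkout layout from this post (a src/peft package importable next to the training script):

from peft import PeftModel            # installed library
from src.peft import get_peft_model   # local source tree -- the mistake

peft_model = get_peft_model(model, config)
print(type(peft_model))                   # <class 'src.peft.peft_model.PeftModelForCausalLM'>

# A different module path means a different class object, so Trainer still raises:
print(isinstance(peft_model, PeftModel))  # False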

from src.peft import LoraConfig, TaskType, get_peft_model
should be changed to
from peft import LoraConfig, TaskType, get_peft_model

After this change, training works.
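Putting it together, the corrected flow only touches the installed peft library. A minimal end-to-end sketch (model name, LoRA settings and dataset handling are placeholders, not from the original code):

from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16)
model = get_peft_model(model, config)  # now an instance of peft.PeftModel

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    # train_dataset=...,  # attach a tokenized dataset before calling trainer.train()
)
# Constructing the Trainer no longer raises the ValueError; training proceeds
# once a dataset is supplied.

For k-bit training it is also common to call peft.prepare_model_for_kbit_training(model) before get_peft_model, but that is independent of this error.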


Original post: https://blog.csdn.net/zhilaizhiwang/article/details/142703905
