
TinyBERT: Distilling BERT for Natural Language Understanding (EMNLP 2020)

Paper: https://arxiv.org/pdf/1909.10351.pdf

Code: https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT

0. Recommended reading on distilling Transformer models (using BERT as the example)

SKDBert (AAAI 2023): multi-teacher distillation + stochastic sampling distribution ✖️

TinyBERT (EMNLP 2020): specified cross-layer distillation √

MobileBERT (ACL 2020): small model size √

PKDBert (2019): shallower student; transfers knowledge from hidden states at multiple intermediate layers; the teacher is fine-tuned on the target task

DistilBERT (NeurIPS 2019): halves the depth

MiniLM (NeurIPS 2020)

SqueezeBERT (2020): grouped convolutions in multiple layers

Internal KD (AAAI 2020): specified cross-layer distillation

1. TinyBERT

TinyBERT (EMNLP 2020): specified cross-layer distillation √ (separate MSE losses on the embedding layer, the attention matrices, and the hidden states of the MLP/FFN output)

TinyBERT > DistilBERT, PKDBert | MobileBERT (24 layers)

Layer-mapping (cross-layer) strategy, where g(m) is the teacher layer assigned to student layer m (see the sketch below):

-> TinyBERT6: g(m) = 2 × m

-> TinyBERT4: g(m) = 3 × m
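
A minimal sketch of the uniform layer mapping above, assuming a 12-layer BERT-base teacher; the function name is illustrative, not taken from the released code:

```python
# Uniform layer mapping g(m): student layer m distills from teacher layer g(m).
def layer_mapping(num_student_layers: int, num_teacher_layers: int = 12):
    """Return {student layer m: teacher layer g(m)} with g(m) = k * m,
    where k = num_teacher_layers // num_student_layers."""
    k = num_teacher_layers // num_student_layers
    return {m: k * m for m in range(1, num_student_layers + 1)}

print(layer_mapping(6))  # TinyBERT6: g(m) = 2m -> {1: 2, 2: 4, ..., 6: 12}
print(layer_mapping(4))  # TinyBERT4: g(m) = 3m -> {1: 3, 2: 6, 3: 9, 4: 12}
```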

Distillation objectives: L_attn, L_hidn, L_embd use MSE loss; L_pred uses (soft) cross-entropy loss
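
A hedged PyTorch sketch of these objectives; the tensor shapes in the toy usage are assumptions, and W_h stands for the learnable projection the paper uses to map the student's smaller hidden size d' into the teacher's hidden size d:

```python
import torch
import torch.nn.functional as F

def attn_loss(s_attn, t_attn):
    # L_attn: MSE between student and teacher attention matrices
    # (the paper uses the unnormalized attention scores).
    return F.mse_loss(s_attn, t_attn)

def hidn_loss(s_hidden, t_hidden, W_h):
    # L_hidn: MSE after projecting the student's hidden states (size d')
    # into the teacher's hidden size d with the learnable matrix W_h.
    return F.mse_loss(W_h(s_hidden), t_hidden)

def embd_loss(s_emb, t_emb, W_e):
    # L_embd: same form as L_hidn, applied to the embedding-layer output.
    return F.mse_loss(W_e(s_emb), t_emb)

def pred_loss(s_logits, t_logits, T=1.0):
    # L_pred: soft cross-entropy between teacher and student logits
    # (temperature T; the paper reports T = 1 works well).
    t_prob = F.softmax(t_logits / T, dim=-1)
    return -(t_prob * F.log_softmax(s_logits / T, dim=-1)).sum(-1).mean()

# Toy usage with assumed shapes (batch=2, heads=12, seq=8, d'=312, d=768):
W_h = torch.nn.Linear(312, 768, bias=False)
loss = (attn_loss(torch.randn(2, 12, 8, 8), torch.randn(2, 12, 8, 8))
        + hidn_loss(torch.randn(2, 8, 312), torch.randn(2, 8, 768), W_h)
        + pred_loss(torch.randn(2, 3), torch.randn(2, 3)))
```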

Distillation stages: two-stage learning -> mirrors BERT's pre-training-then-fine-tuning paradigm (see the sketch after this list)

GD (General Distillation) √

TD (Task-specific Distillation)

DA (Data Augmentation)
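
A high-level sketch of how GD, DA, and TD fit together; every helper below is a hypothetical placeholder (not the authors' released code), included only to show the order of the two stages:

```python
def general_distillation(teacher_pretrained, student, general_corpus):
    # Stage 1 (GD): distill on a large unlabeled corpus using only the
    # Transformer-layer objectives (L_embd, L_hidn, L_attn), no L_pred.
    return student  # placeholder: a real run optimizes the student here

def augment(task_dataset):
    # DA: enlarge the task data, e.g. word replacement with BERT-MLM
    # predictions / GloVe nearest neighbours as in the paper.
    return task_dataset  # placeholder

def task_distillation(teacher_finetuned, student, task_dataset):
    # Stage 2 (TD): intermediate-layer distillation first, then
    # prediction-layer distillation (soft CE on logits) from the
    # task-fine-tuned teacher.
    return student  # placeholder

def train_tinybert(teacher_pretrained, teacher_finetuned, student,
                   general_corpus, task_data):
    general_tinybert = general_distillation(teacher_pretrained, student, general_corpus)
    return task_distillation(teacher_finetuned, general_tinybert, augment(task_data))
```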

Ablation study results


Original post: https://blog.csdn.net/zmc1248234377/article/details/142761289
