Deep Learning: Hands-On with MindTorch, the MindSpore Ecosystem Bridge for PyTorch
Getting Started with MindTorch
MindTorch is a practical tool for efficiently migrating PyTorch training scripts to run on the MindSpore framework. It aims to let PyTorch-style code achieve high performance on Ascend hardware without changing native PyTorch users' programming habits. In the main entry file of the PyTorch source, before any torch-related packages (such as torch or torchvision) are imported, the user only needs to call from mindtorch.tools import mstorch_enable, plus a small amount of training-code adaptation, to train the model on Ascend hardware.
Installing MindTorch
Install the stable release via pip
pip install mindtorch
Install from source (development version)
git clone https://git.openi.org.cn/OpenI/MSAdapter.git
cd MSAdapter
python setup.py install
Quick Start
In the main entry file of your PyTorch code, before import torch, add:
from mindtorch.tools import mstorch_enable
This is all it takes for a quick migration.
Advanced MindTorch
Optimizer and Learning-Rate Adaptation
1. Differences when printing the learning rate
PyTorch code
import torch
optimizer = torch.optim.SGD([torch.nn.Parameter(torch.tensor(2.))], lr=0.01)
print('lr is {}'.format(optimizer.param_groups[0]['lr']))
MindTorch code
import mindtorch as torch
optimizer = torch.optim.SGD([torch.nn.Parameter(torch.tensor(2.))], lr=0.01)
print('lr is {}'.format(float(optimizer.param_groups[0]['lr'])))
The learning rate must be converted to a Python number before printing.
2. Differences when modifying the learning rate
In PyNative (dynamic graph) mode, there is no difference from the PyTorch code.
In graph (static graph) mode, the learning rate can only be modified via mindspore.ops.assign.
import mindspore
import mindtorch.torch as torch

optimizer = torch.optim.SGD([torch.nn.Parameter(torch.tensor(2.))], lr=0.01)
# In graph mode, the learning rate must be modified with mindspore.ops.assign
mindspore.ops.assign(optimizer.param_groups[0]['lr'], 0.1)
3. Differences in the arguments of optimizer.step()
PyTorch code
...
net = Net()
loss = net(input)
loss.backward()
optimizer.step()
MindTorch code
import mindspore
import mindtorch.torch as torch
...
net = Net()
grad_fn = mindspore.ops.value_and_grad(net, None, optimizer.parameters)
loss, grads = grad_fn(input)  # compute the loss and gradients via the value_and_grad interface
optimizer.step(grads)  # the computed gradients grads must be passed into the step function
When calling optimizer.step, the computed gradients must still be passed in as an argument.
4. Differences in custom optimizers
PyTorch code
import torch

class Ranger(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3, alpha=0.5, k=6):
        defaults = dict(lr=lr, alpha=alpha)
        super().__init__(params, defaults)
        self.k = k

    def __setstate__(self, state):
        print('set state called')
        super().__setstate__(state)

    def step(self, closure=None):
        loss = None
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data.float()
                p_data_fp32 = p.data.float()
                state = self.state[p]
                if 'step' not in state:
                    state['step'] = 0  # initialize the step counter on first use
                state['step'] += 1
                p_data_fp32.add_(grad)
                p.data.copy_(p_data_fp32)
        return loss
MindTorch code
import mindtorch.torch as torch

class Ranger(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3, alpha=0.5, k=6):
        defaults = dict(lr=lr, alpha=alpha)
        super().__init__(params, defaults)
        self.k = k

    def __setstate__(self, state):
        print('set state called')
        super().__setstate__(state)

    def step(self, grads, closure=None):  # grads must be added as a parameter to receive the gradients
        loss = None
        i = -1  # index used to walk the grads argument
        for group in self.param_groups:
            for p in group['params']:
                i = i + 1  # advance the index
                grad = grads[i]
                p_data_fp32 = p.data.float()
                state = self.state[p]
                if 'step' not in state:
                    state['step'] = 0  # initialize the step counter on first use
                state['step'] += 1
                p_data_fp32.add_(grad)
                p.data.copy_(p_data_fp32)
        return loss
grads must be added as an input to the step function.
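The indexing pattern used by the custom step() above, walking one flat gradient list across nested parameter groups, can be sketched in plain Python. The names and values below are purely illustrative, not MindTorch API:

```python
# Sketch: pairing a flat list of gradients with parameters that are
# spread across several param groups, in group order.
param_groups = [
    {"params": ["w1", "w2"], "lr": 0.01},
    {"params": ["w3"], "lr": 0.001},
]
grads = [0.1, 0.2, 0.3]  # one gradient per parameter, in group order

pairs = []
i = -1  # index that advances across all groups, as in the custom step()
for group in param_groups:
    for p in group["params"]:
        i += 1
        pairs.append((p, grads[i]))

print(pairs)  # [('w1', 0.1), ('w2', 0.2), ('w3', 0.3)]
```

The flat index works because MindSpore returns gradients in the same order as the parameters passed to value_and_grad.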
5. Custom LRScheduler
In PyNative mode, the modification is identical to PyTorch.
In graph mode, the learning rate must be modified with mindspore.ops.assign so that the learning rate inside the optimizer always remains a Parameter type.
import mindspore

class TransformerLrScheduler():
    def __init__(self, optimizer, d_model, warmup_steps, multiplier=5):
        self._optimizer = optimizer
        self.d_model = d_model
        self.warmup_steps = warmup_steps
        self.n_steps = 0
        self.multiplier = multiplier

    def step(self):
        self.n_steps += 1
        lr = self._get_lr()
        for param_group in self._optimizer.param_groups:
            mindspore.ops.assign(param_group['lr'], lr)

    def _get_lr(self):
        return self.multiplier * (self.d_model ** -0.5) * min(self.n_steps ** (-0.5), self.n_steps * (self.warmup_steps ** (-1.5)))
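The warmup schedule that _get_lr implements is the standard Noam form (the minimum of a decay term and a linear warmup term), and it can be checked in plain Python, independent of any framework. The parameter values below are only examples:

```python
# Noam-style learning-rate schedule: linear warmup, then step**-0.5 decay.
def noam_lr(step, d_model=512, warmup_steps=4000, multiplier=5):
    return multiplier * (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

# During warmup the rate grows, then decays after warmup_steps.
assert noam_lr(1) < noam_lr(4000)       # still warming up
assert noam_lr(4000) > noam_lr(16000)   # decaying after warmup
```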
Adapting the MindTorch Differentiation Interface
Approach 1
PyTorch code
net = LeNet().to(config_args.device)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
net.train()
for i in range(epochs):
    for X, y in train_data:
        X, y = X.to(config_args.device), y.to(config_args.device)
        out = net(X)
        loss = criterion(out, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
MindTorch code
import mindtorch.torch as torch
import mindspore as ms

net = LeNet().to(config_args.device)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)

# Define the forward pass: the network call plus the loss-function call
def forward_fn(data, label):
    logits = net(data)
    loss = criterion(logits, label)
    return loss, logits

# Define the backward pass: wraps the forward function and the parameters
'''
mindspore.value_and_grad: builds a differentiation function that computes both the
forward result and the gradients of the given function.
Differentiation covers three cases:
  - gradients w.r.t. the inputs: grad_position is not None and weights is None;
  - gradients w.r.t. network variables: grad_position is None and weights is not None;
  - gradients w.r.t. both: grad_position and weights are both not None.
weights (Union[ParameterTuple, Parameter, list[Parameter]]) - the network variables
whose gradients should be returned; typically obtained via
weights = net.trainable_params(). Default: None.
has_aux (bool) - whether auxiliary outputs are returned. If True, fn must have more
than one output; only the first output of fn is differentiated, and the other
outputs are returned as-is.
'''
grad_fn = ms.ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

# Define one training step: compute the gradients, then pass them to the optimizer
def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
    return loss

net.train()
# Training loop: iterate over the training data and call the single-step function
for i in range(epochs):
    for X, y in train_data:
        X, y = X.to(config_args.device), y.to(config_args.device)
        res = train_step(X, y)
Approach 2
MindTorch is developing a counterpart to the Tensor.backward() interface; with it, users will not need to modify the pre-migration torch source at all, making migration even faster. Note that this feature is currently experimental and has the following usage constraints:
- the environment variable export ENABLE_BACKWARD=1 must be set;
- it must run in PyNative mode, i.e. ms.set_context(mode=ms.PYNATIVE_MODE);
- only Python 3.7 and Python 3.9 environments are currently supported;
- a small number of usage scenarios may raise errors;
- network execution may be slower.
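Since the experimental backward support reads ENABLE_BACKWARD from the environment, it can also be set from Python, provided this happens before the torch-related imports. A minimal sketch of the ordering, with the actual imports left as comments:

```python
import os

# Must be visible before mindtorch/torch are imported;
# equivalent to `export ENABLE_BACKWARD=1` in the shell.
os.environ["ENABLE_BACKWARD"] = "1"

# from mindtorch.tools import mstorch_enable  # then the usual entry-point import
# import torch                                # torch-style code with loss.backward() follows
print(os.environ["ENABLE_BACKWARD"])  # 1
```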
Mixed-Precision Training and Adaptation
Mixed-precision training is a technique that uses floating-point numbers of different precisions during deep-learning model training. It aims to exploit the speed and memory advantages of low-precision computation while preserving the model's numerical stability and accuracy. Concretely, mixed-precision training usually combines single-precision floats (Float32) with half-precision floats (Float16).
Basic principles
- Forward and backward passes:
  - Forward pass: most computation runs in half precision (Float16) to reduce memory usage and speed up computation.
  - Backward pass: gradients are likewise computed in half precision.
- Weight update:
  - The weights and gradients are stored in single precision (Float32) to ensure numerical stability.
  - At update time, the half-precision gradients are converted to single precision and applied to the single-precision weights.
- Loss scaling:
  - To avoid gradient underflow (gradients too small to be representable), a technique called loss scaling is usually applied: multiply the loss by a large constant (usually a power of two) before backpropagation, then divide the gradients by the same constant before updating the weights.
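The loss-scaling arithmetic described above can be illustrated without any framework: scale the loss by a power of two before backpropagation, and divide the resulting gradients by the same factor before the update. The magnitudes below are made up for illustration:

```python
scale = 2.0 ** 10  # loss-scaling factor, typically a power of two

grad_fp16_floor = 6e-8  # roughly the smallest positive float16 magnitude
tiny_grad = 3e-8        # a gradient that would underflow in float16

# Backprop runs on the scaled loss, so every gradient is scaled too.
scaled_grad = tiny_grad * scale
assert scaled_grad > grad_fp16_floor  # now representable in float16

# Before the optimizer step, unscale to recover the true gradient.
unscaled = scaled_grad / scale
assert abs(unscaled - tiny_grad) < 1e-20  # exact here, since scale is a power of two
```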
PyTorch code
from torch.cuda.amp import autocast, GradScaler

model = Net().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
scaler = GradScaler()
model.train()
for epoch in range(epochs):
    for inputs, target in data:
        optimizer.zero_grad()
        with autocast():
            output = model(inputs)
            loss = loss_fn(output, target)
        # scale the loss
        loss = scaler.scale(loss)
        loss.backward()
        # unscale the gradients
        scaler.unscale_(optimizer)
        # clip the gradients
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
        # update the weights
        scaler.step(optimizer)
        scaler.update()
MindTorch code
import mindtorch.torch as torch
from mindtorch.torch.cuda.amp import GradScaler
from mindspore.amp import auto_mixed_precision
import mindspore as ms

model = Net().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
scaler = GradScaler()
# model methods must be called before the mixed-precision conversion
model.train()
# model becomes a mixed-precision model; its output tensors need a type conversion
model = auto_mixed_precision(model, 'O3')  # 'O3' for Ascend, 'O2' for GPU

def forward_fn(data, target):
    logits = model(data)
    logits = torch.cast_to_adapter_tensor(logits)
    loss = criterion(logits, target)
    loss = scaler.scale(loss)  # scale the loss
    return loss

grad_fn = ms.ops.value_and_grad(forward_fn, None, optimizer.parameters)

def train_step(data, target):
    loss, grads = grad_fn(data, target)
    return loss, grads

for epoch in range(epochs):
    for inputs, target in data:
        loss, grads = train_step(inputs, target)
        scaler.unscale_(optimizer, grads)  # unscale the gradients
        grads = ms.ops.clip_by_global_norm(grads, max_norm)  # clip the gradients
        scaler.step(optimizer, grads)  # update the weights
        scaler.update()  # update the scaler state
- Call auto_mixed_precision to generate the mixed-precision model automatically. If you need to call methods of the original model, such as model.train(), do so before the mixed-precision model is generated.
- If the network's output Tensors are operated on afterwards, call cast_to_adapter_tensor to convert them to MindTorch Tensors manually.
- When using GradScaler to scale gradients, differences in the automatic-differentiation mechanism and interfaces mean that unscale_, step, and similar interfaces need the gradients grads passed in as an argument.
Parallel Training with MindSpore
Data parallelism with MindSpore
import mindtorch.torch as torch
from mindtorch.torch.utils.data import DataLoader, DistributedSampler
from mindspore.communication import init
import mindspore as ms
from mindspore import nn

init("hccl")  # initialize the communication backend: "hccl" for Ascend, "nccl" for GPU, "mccl" for CPU
ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL)  # configure data-parallel mode
torch.manual_seed(1)  # fix the random seed so every card starts from the same weights, which helps convergence
train_images = datasets.CIFAR10('./', train=True, download=True, transform=transform)
sampler = DistributedSampler(train_images)  # distributed data sharding
train_data = DataLoader(train_images, batch_size=32, num_workers=2, drop_last=True, sampler=sampler)

def forward_fn(data, label):
    logits = net(data)
    loss = criterion(logits, label)
    return loss, logits

grad_fn = ms.ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)
grad_reducer = nn.DistributedGradReducer(optimizer.parameters)  # define the distributed gradient reducer

def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    grads = grad_reducer(grads)  # aggregate gradients across cards
    optimizer(grads)
    return loss

net.train()
for i in range(epochs):
    for inputs, target in train_data:
        res = train_step(inputs, target)
Automatic parallelism with MindSpore
import mindtorch.torch as torch
from mindtorch.torch.utils.data import DataLoader, DistributedSampler
from mindspore.communication import init
import mindspore as ms

# automatic parallelism only supports graph mode
ms.set_context(mode=ms.GRAPH_MODE, jit_syntax_level=ms.STRICT)
init("hccl")  # initialize the communication backend: "hccl" for Ascend, "nccl" for GPU, "mccl" for CPU
ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.AUTO_PARALLEL)  # configure automatic-parallel mode
torch.manual_seed(1)  # fix the random seed so every card starts from the same weights, which helps convergence
train_images = datasets.CIFAR10('./', train=True, download=True, transform=transform)
sampler = DistributedSampler(train_images)  # distributed data sharding
train_data = DataLoader(train_images, batch_size=32, num_workers=2, drop_last=True, sampler=sampler)

def forward_fn(data, label):
    logits = net(data)
    loss = criterion(logits, label)
    return loss, logits

grad_fn = ms.ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

@ms.jit
def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
    return loss

net.train()
for i in range(epochs):
    for inputs, target in train_data:
        res = train_step(inputs, target)
MindTorch Accuracy Tuning
PyTorch code
import torch
from mindtorch.tools import debug_layer_info

net = Net()
net.load_state_dict(torch.load('pytorch.pth'))
net.eval()
debug_layer_info(net, frame='pytorch')
for X, y in data:
    pred = net(X)
    ...
    exit()
MindTorch code
import mindtorch.torch as torch
from mindtorch.tools import debug_layer_info

net = Net()
net.load_state_dict(torch.load('pytorch.pth'))
net.eval()
debug_layer_info(net)
for X, y in data:
    pred = net(X)
    ...
    exit()
Step 1: make sure the network inputs are exactly identical (either fixed input data or a real dataset works).
Step 2: make sure both models run in inference mode.
Step 3: make sure the network weights are consistent.
Step 4: print the inference results of the PyTorch and MindTorch models and compare them; if the accuracy error is within 1e-3, the migrated model's accuracy is considered normal.
Step 5: print per-layer network information to help locate accuracy anomalies. When the network output error is too large, use the debugging tool (debug_layer_info) to inspect each layer's inputs and outputs; this quickly pinpoints the layer causing the anomaly and speeds up accuracy analysis. You can also set breakpoints at key locations in PyNative mode and narrow the range step by step until it is clear whether the error is reasonable.
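Step 4's comparison can be as simple as checking the maximum absolute difference between the two inference outputs against the 1e-3 tolerance. A minimal sketch with made-up output values:

```python
# Compare two flattened inference outputs element-wise (illustrative values only).
pytorch_out = [0.1021, -0.3344, 0.5712]
mindtorch_out = [0.1019, -0.3341, 0.5714]

max_abs_diff = max(abs(a - b) for a, b in zip(pytorch_out, mindtorch_out))
print(max_abs_diff <= 1e-3)  # True: migrated model accuracy is considered normal
```

In practice the outputs would come from net(X) in each framework, flattened to Python lists or numpy arrays before comparing.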
MindTorch in Practice
After creating compute resources on the ModelArts platform, open a terminal and clone the official tutorial repository.
git clone https://openi.pcl.ac.cn/OpenI/mindtorch_tutorial.git
Set up the project environment
cd mindtorch_tutorial/course_code/
sh requirements.sh
pip install torch torchvision
Check NPU status
watch -n 1 npu-smi info
1. Quick start with MindTorch
Keeping the original PyTorch code unchanged, add the following statement as the first line of the notebook:
from mindtorch.tools import mstorch_enable
That is all it takes to migrate from PyTorch to MindTorch.
python code1_quick_start.py
If MindTorch fails at runtime with a version-mismatch error:
Cause: the mindtorch version does not match mindspore; the matching versions must be installed.
Fix:
Uninstall the existing versions
pip uninstall mindspore mindtorch
Install the matching versions
MindSpore 2.4.0
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.4.0/MindSpore/unified/aarch64/mindspore-2.4.0-cp39-cp39-linux_aarch64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
MindTorch 0.4.0
pip install mindtorch
MindTorch cannot yet use PyTorch's Tensor.backward() differentiation method directly.
The environment must be configured first:
export ENABLE_BACKWARD=1
after which it runs normally.
The other differentiation approach is to use MindSpore's functional method, which requires some changes to the PyTorch code.
Run code3_grad.py to see how to differentiate with MindSpore's functional interface.
Migrating PyTorch LeNet with MindTorch and training with the MindSpore differentiation approach
The following adapts the official sample code to migrate LeNet:
from mindtorch.tools import mstorch_enable
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.datasets.mnist import MNIST
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import time
import mindspore as ms
import argparse

data_train = MNIST('./data/mnist',
                   download=True,
                   transform=transforms.Compose([
                       transforms.Resize((32, 32)),
                       transforms.ToTensor()]))
data_test = MNIST('./data/mnist',
                  train=False,
                  download=True,
                  transform=transforms.Compose([
                      transforms.Resize((32, 32)),
                      transforms.ToTensor()]))
train_data = DataLoader(data_train, batch_size=128, shuffle=True, num_workers=4, drop_last=True)
test_data = DataLoader(data_test, batch_size=128, num_workers=4, drop_last=True)
def train(config_args):
    epochs = config_args.epoch
    net = LeNet5().to(config_args.device)
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
    criterion = nn.CrossEntropyLoss()

    # forward function
    def forward_fn(data, label):
        predicts = net(data)
        loss = criterion(predicts, label)
        return loss, predicts

    grad_fn = ms.ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

    def train_net(data, label):
        (loss, _), grads = grad_fn(data, label)
        optimizer(grads)
        return loss

    net.train()
    print("begin training ......")
    for i in range(epochs):
        epoch_begin = time.time()
        for X, y in train_data:
            res = train_net(X, y)
        print("---------------------->epoch:{}, loss:{:.2f}".format(i, res.asnumpy()))
        print("--------------->epoch:{}, total time:{:.6f}".format(i, time.time() - epoch_begin))
    torch.save(net.state_dict(), config_args.save_path)
def test(config_args):
    net = LeNet5().to(config_args.device)
    net.load_state_dict(torch.load(config_args.load_path), strict=True)
    criterion = nn.CrossEntropyLoss()
    size = len(test_data.dataset)
    num_batches = len(test_data)
    net.eval()
    test_loss, correct = 0, 0
    print("begin testing ......")
    with torch.no_grad():  # comment out this line for graph mode accelerating
        for X, y in test_data:
            X, y = X.to(config_args.device), y.to(config_args.device)
            pred = net(X)
            test_loss += criterion(pred, y).item()
            correct += (pred.argmax(1) == y).to(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100 * correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
def run(mode='train'):
    seed = 1
    torch.manual_seed(seed)

    # define the configuration values directly
    class ConfigArgs:
        def __init__(self):
            self.mode = mode  # 'train' or 'test'
            self.device = 'Ascend'  # or 'GPU', 'CPU'
            self.epoch = 2
            self.save_path = './lenet5.pth'  # save path for the LeNet5 model
            self.load_path = './lenet5.pth'  # load path for the LeNet5 model
            self.dataset = './'

    config_args = ConfigArgs()

    # set the MindSpore context
    if config_args.device.lower() in ("gpu", "cuda"):
        ms.context.set_context(device_target="GPU")
        print("Using GPU for model execution.")
    elif config_args.device.lower() in ("cpu",):
        ms.context.set_context(device_target="CPU")
        print("Using CPU for model execution.")
    elif config_args.device.lower() == "ascend":
        ms.context.set_context(device_target="Ascend")
        print("Using Ascend for model execution.")
    else:
        print("WARNING: '--device' configuration is abnormal, and the appropriate device will be adapted.")
        ms.context.set_context(device_target="CPU")
        print("Falling back to using CPU for model execution.")

    # # for graph mode accelerating
    # ms.context.set_context(mode=ms.GRAPH_MODE)
    # ms.set_context(jit_syntax_level=ms.STRICT)

    # run training or testing
    if config_args.mode == 'train':
        train(config_args)
    elif config_args.mode == 'test':
        test(config_args)
The PyTorch LeNet model architecture:
import torch.nn as nn
from collections import OrderedDict

class C1(nn.Module):
    def __init__(self):
        super(C1, self).__init__()
        self.c1 = nn.Sequential(OrderedDict([
            ('c1', nn.Conv2d(1, 6, kernel_size=(5, 5))),
            ('relu1', nn.ReLU()),
            ('s1', nn.MaxPool2d(kernel_size=(2, 2), stride=2))
        ]))

    def forward(self, img):
        output = self.c1(img)
        return output

class C2(nn.Module):
    def __init__(self):
        super(C2, self).__init__()
        self.c2 = nn.Sequential(OrderedDict([
            ('c2', nn.Conv2d(6, 16, kernel_size=(5, 5))),
            ('relu2', nn.ReLU()),
            ('s2', nn.MaxPool2d(kernel_size=(2, 2), stride=2))
        ]))

    def forward(self, img):
        output = self.c2(img)
        return output

class C3(nn.Module):
    def __init__(self):
        super(C3, self).__init__()
        self.c3 = nn.Sequential(OrderedDict([
            ('c3', nn.Conv2d(16, 120, kernel_size=(5, 5))),
            ('relu3', nn.ReLU())
        ]))

    def forward(self, img):
        output = self.c3(img)
        return output

class F4(nn.Module):
    def __init__(self):
        super(F4, self).__init__()
        self.f4 = nn.Sequential(OrderedDict([
            ('f4', nn.Linear(120, 84)),
            ('relu4', nn.ReLU())
        ]))

    def forward(self, img):
        output = self.f4(img)
        return output

class F5(nn.Module):
    def __init__(self):
        super(F5, self).__init__()
        self.f5 = nn.Sequential(OrderedDict([
            ('f5', nn.Linear(84, 10)),
            ('sig5', nn.LogSoftmax(dim=-1))
        ]))

    def forward(self, img):
        output = self.f5(img)
        return output

class LeNet5(nn.Module):
    """
    Input - 1x32x32
    Output - 10
    """
    def __init__(self):
        super(LeNet5, self).__init__()
        self.c1 = C1()
        self.c2_1 = C2()
        self.c2_2 = C2()
        self.c3 = C3()
        self.f4 = F4()
        self.f5 = F5()

    def forward(self, img):
        output = self.c1(img)
        x = self.c2_1(output)
        output = self.c2_2(output)
        output += x
        output = self.c3(output)
        output = output.view(img.size(0), -1)
        output = self.f4(output)
        output = self.f5(output)
        return output
Run result:
The network trains successfully on the Ascend device.
Original article: https://blog.csdn.net/Landy_Jay/article/details/143722879