This post trains a classification network with transfer learning, following the PyTorch transfer learning tutorial.

There are two main transfer-learning strategies:

Finetuning the convnet
Initialize the network with pretrained weights (e.g. a network pretrained on ImageNet) instead of random initialization, then train the whole network as usual.

ConvNet as fixed feature extractor
Freeze the parameters of all convolutional layers and train only the fully connected layers at the end.
This post relies on the following usage:

Getting the class names: dataset.classes

class_names = image_datasets['train'].classes
torch.Tensor.numpy()
Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa.

a = torch.tensor([1, 2])
b = a.numpy()
a[0] = 2
b  # [2, 2]

img = np.clip(img, 0, 1)
Clamps the data to the given range.

tensor.double(), tensor.float(), tensor.int()
Return the tensor converted to the corresponding dtype.

model.state_dict()
Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are
included. Keys are corresponding parameter and buffer names.

torch.optim.lr_scheduler
Provides several ways to adjust the learning rate based on the number of epochs, including LambdaLR, StepLR, MultiStepLR, ExponentialLR, and ReduceLROnPlateau.
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
for epoch in range(100):
    train(...)
    validate(...)
    scheduler.step()  # since PyTorch 1.1, step the scheduler after the optimizer updates
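A quick check of the schedule above (with a smaller step_size so the decay is visible within a few epochs):

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR

# StepLR multiplies the learning rate by gamma every step_size epochs.
param = torch.zeros(1, requires_grad=True)
optimizer = SGD([param], lr=0.1)
scheduler = StepLR(optimizer, step_size=2, gamma=0.1)

lrs = []
for epoch in range(6):
    optimizer.step()    # normally preceded by loss.backward()
    scheduler.step()    # decay applied every step_size epochs
    lrs.append(round(optimizer.param_groups[0]['lr'], 6))
print(lrs)  # [0.1, 0.01, 0.01, 0.001, 0.001, 0.0001]
```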
import torch
phases = ('train', 'val')
def imshow(img, title=None): ...
def train(model, optimizer, criterion, scheduler, epoch_num=20): ...
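The extraction collapsed the full function body to its first line. As a rough, self-contained sketch of what such a train function typically does (the explicit dataloaders argument and the tiny smoke-test data below are my additions, not the tutorial's exact code):

```python
import copy
import torch
import torch.nn as nn

def train(model, optimizer, criterion, scheduler, dataloaders, epoch_num=2):
    best_acc, best_wts = 0.0, copy.deepcopy(model.state_dict())
    for epoch in range(epoch_num):
        for phase in ('train', 'val'):
            if phase == 'train':
                model.train()
            else:
                model.eval()
            running_loss, running_corrects, n = 0.0, 0, 0
            for inputs, labels in dataloaders[phase]:
                optimizer.zero_grad()
                # gradients only in the train phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    loss = criterion(outputs, labels)
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                preds = outputs.argmax(dim=1)
                running_loss += loss.item() * inputs.size(0)
                running_corrects += (preds == labels).sum().item()
                n += inputs.size(0)
            acc = running_corrects / n
            print(f'{phase} Loss: {running_loss / n:.4f}, Acc: {acc:.4f}')
            if phase == 'val' and acc > best_acc:
                best_acc, best_wts = acc, copy.deepcopy(model.state_dict())
        if scheduler is not None:
            scheduler.step()  # after this epoch's optimizer updates
    model.load_state_dict(best_wts)  # hand back the best-performing weights
    return model

# Smoke test on random data with a tiny linear model
batches = [(torch.randn(8, 4), torch.randint(0, 2, (8,)))]
dataloaders = {'train': batches, 'val': batches}
model = nn.Linear(4, 2)
model = train(model, torch.optim.SGD(model.parameters(), lr=0.1),
              nn.CrossEntropyLoss(), None, dataloaders)
```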
def visualize_predict(model, num_images=6): ...
model_ft = models.resnet18(pretrained=True)
model_ft = train(model_ft, optimizer_ft, criterion, exp_lr_scheduler)
Epoch 0/19
----------
train Loss: 0.6718, Acc: 0.6762
val Loss: 0.2909, Acc: 0.8758
Epoch 1/19
----------
train Loss: 0.4903, Acc: 0.7664
val Loss: 0.2409, Acc: 0.8824
Epoch 2/19
----------
train Loss: 0.4883, Acc: 0.8074
val Loss: 0.4210, Acc: 0.8301
Epoch 3/19
----------
train Loss: 0.6792, Acc: 0.7008
val Loss: 0.3024, Acc: 0.8758
Epoch 4/19
----------
train Loss: 0.4854, Acc: 0.7951
val Loss: 0.5967, Acc: 0.8105
Epoch 5/19
----------
train Loss: 0.6275, Acc: 0.7951
val Loss: 0.2365, Acc: 0.9150
Epoch 6/19
----------
train Loss: 0.7099, Acc: 0.7582
val Loss: 0.4348, Acc: 0.8758
Epoch 7/19
----------
train Loss: 0.3858, Acc: 0.8730
val Loss: 0.2605, Acc: 0.9216
Epoch 8/19
----------
train Loss: 0.3358, Acc: 0.8607
val Loss: 0.2593, Acc: 0.9281
Epoch 9/19
----------
train Loss: 0.3761, Acc: 0.8361
val Loss: 0.2400, Acc: 0.9150
Epoch 10/19
----------
train Loss: 0.3718, Acc: 0.8525
val Loss: 0.2041, Acc: 0.9346
Epoch 11/19
----------
train Loss: 0.3000, Acc: 0.8811
val Loss: 0.1988, Acc: 0.9346
Epoch 12/19
----------
train Loss: 0.4123, Acc: 0.8484
val Loss: 0.2236, Acc: 0.9150
Epoch 13/19
----------
train Loss: 0.2699, Acc: 0.9057
val Loss: 0.1881, Acc: 0.9477
Epoch 14/19
----------
train Loss: 0.3591, Acc: 0.8648
val Loss: 0.1844, Acc: 0.9412
Epoch 15/19
----------
train Loss: 0.2312, Acc: 0.9180
val Loss: 0.2003, Acc: 0.9281
Epoch 16/19
----------
train Loss: 0.2395, Acc: 0.8852
val Loss: 0.3438, Acc: 0.8824
Epoch 17/19
----------
train Loss: 0.2238, Acc: 0.9139
val Loss: 0.2004, Acc: 0.9216
Epoch 18/19
----------
train Loss: 0.2948, Acc: 0.8730
val Loss: 0.1862, Acc: 0.9346
Epoch 19/19
----------
train Loss: 0.2991, Acc: 0.8648
val Loss: 0.2627, Acc: 0.9216
Training complete in 0m 32s
Best val Acc: 0.9477
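The best weights found during training can be persisted with the state_dict() method covered above. A minimal, self-contained sketch using a tiny stand-in module (for the real model you would pass model_ft.state_dict() the same way):

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
path = os.path.join(tempfile.mkdtemp(), 'best_model.pth')
torch.save(model.state_dict(), path)       # saves tensors only, not the class

restored = nn.Linear(4, 2)                 # rebuild the same architecture first
restored.load_state_dict(torch.load(path))
assert torch.equal(model.weight, restored.weight)
```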
visualize_predict(model_ft)
model_conv = torchvision.models.resnet18(pretrained=True)
model_conv = train(model_conv, optimizer_conv, criterion, exp_lr_scheduler)
Epoch 0/19
----------
train Loss: 0.7631, Acc: 0.5615
val Loss: 0.2695, Acc: 0.9020
Epoch 1/19
----------
train Loss: 0.7224, Acc: 0.7254
val Loss: 0.3596, Acc: 0.8758
Epoch 2/19
----------
train Loss: 0.3871, Acc: 0.8361
val Loss: 0.1821, Acc: 0.9346
Epoch 3/19
----------
train Loss: 0.3875, Acc: 0.8115
val Loss: 0.1709, Acc: 0.9542
Epoch 4/19
----------
train Loss: 0.4280, Acc: 0.7787
val Loss: 0.2612, Acc: 0.8954
Epoch 5/19
----------
train Loss: 0.3769, Acc: 0.8525
val Loss: 0.1932, Acc: 0.9477
Epoch 6/19
----------
train Loss: 0.5753, Acc: 0.7705
val Loss: 0.3275, Acc: 0.8889
Epoch 7/19
----------
train Loss: 0.3103, Acc: 0.8730
val Loss: 0.1959, Acc: 0.9346
Epoch 8/19
----------
train Loss: 0.3225, Acc: 0.8566
val Loss: 0.1967, Acc: 0.9477
Epoch 9/19
----------
train Loss: 0.4565, Acc: 0.7992
val Loss: 0.2357, Acc: 0.9216
Epoch 10/19
----------
train Loss: 0.3099, Acc: 0.8648
val Loss: 0.1959, Acc: 0.9477
Epoch 11/19
----------
train Loss: 0.3562, Acc: 0.8443
val Loss: 0.2094, Acc: 0.9216
Epoch 12/19
----------
train Loss: 0.3817, Acc: 0.8320
val Loss: 0.2029, Acc: 0.9216
Epoch 13/19
----------
train Loss: 0.2989, Acc: 0.8689
val Loss: 0.2207, Acc: 0.9216
Epoch 14/19
----------
train Loss: 0.3139, Acc: 0.8770
val Loss: 0.2097, Acc: 0.9412
Epoch 15/19
----------
train Loss: 0.3140, Acc: 0.8566
val Loss: 0.2014, Acc: 0.9281
Epoch 16/19
----------
train Loss: 0.3595, Acc: 0.8238
val Loss: 0.1981, Acc: 0.9346
Epoch 17/19
----------
train Loss: 0.3495, Acc: 0.8484
val Loss: 0.2064, Acc: 0.9150
Epoch 18/19
----------
train Loss: 0.3043, Acc: 0.8525
val Loss: 0.2191, Acc: 0.9216
Epoch 19/19
----------
train Loss: 0.3634, Acc: 0.8402
val Loss: 0.1821, Acc: 0.9477
Training complete in 0m 22s
Best val Acc: 0.9542
visualize_predict(model_conv)