(Original) The training process in torch

2023-09-06

Please credit the source when reposting:

http://www.cnblogs.com/darkknightzh/p/6221622.html

Reference URLs:

http://ju.outofmemory.cn/entry/284587

https://github.com/torch/nn/blob/master/doc/criterion.md

1. Using updateParameters

Assume we already have model = setupmodel (a model you have built yourself), together with training data input, the ground-truth output outReal, and a loss function criterion (see the second URL above). The training procedure in torch is then:

-- given model, criterion, input, outReal
model:training()
model:zeroGradParameters()
outPredict = model:forward(input)
err = criterion:forward(outPredict, outReal)
grad_criterion = criterion:backward(outPredict, outReal)
model:backward(input, grad_criterion)
model:updateParameters(learningRate)

Line 1 above declares the variables that are assumed to be given.

Line 2 puts the model into training mode.

Line 3 zeroes the gradients stored in every module of the model (so that leftovers from the previous iteration do not contaminate this one).

Line 4 passes the input input through the model to obtain the predicted output outPredict.

Line 5 uses the loss function to compute the error err between the model's prediction outPredict and the ground truth outReal under the current parameters.

Line 6 computes the gradient of the loss function, grad_criterion, from the prediction outPredict and the ground truth outReal.

Line 7 back-propagates through the model, computing the gradient of every module.

Line 8 updates the parameters of every module in the model.

Lines 3 through 8 have to be executed on every iteration.
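Concretely, a minimal sketch of such a loop under the same givens (model, criterion, input, outReal); nIters and learningRate are placeholder values assumed here for illustration, not part of the original snippet:

-- minimal sketch of a full loop around lines 3-8
-- nIters and learningRate are assumed placeholder values
local nIters = 100
local learningRate = 0.01

model:training()                      -- line 2: done once before the loop
for iter = 1, nIters do
    model:zeroGradParameters()                                         -- line 3
    local outPredict = model:forward(input)                            -- line 4
    local err = criterion:forward(outPredict, outReal)                 -- line 5
    local grad_criterion = criterion:backward(outPredict, outReal)     -- line 6
    model:backward(input, grad_criterion)                              -- line 7
    model:updateParameters(learningRate)                               -- line 8
end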

=========================================================

2. Using optim

170301 update:

http://x-wei.github.io/learn-torch-6-optim.html

gives a more convenient approach (whether it really is more convenient is debatable): the parameters can be updated with torch's optim package. Calling model:updateParameters directly only gives you plain gradient descent, whereas optim wraps many algorithms (gradient descent, Adam, and so on).

params_new, fs, ... = optim._method_(feval, params[, config][, state])

where params is the current parameter vector (a 1D tensor); it is updated in place during optimization.

feval: a user-defined closure that behaves like f, df/dx = feval(x)

config: a table of algorithm-specific parameters (e.g. the learning rate)

state: a table of state variables

params_new: the new parameter vector (a 1D tensor) that minimizes the function f

fs: a table of f values evaluated during the optimization; fs[#fs] is the optimized function value

Note: since optim requires the parameters as a single 1D tensor, the model's parameters have to be flattened, which is done with the following function:

params, gradParams = model:getParameters()

Both params and gradParams are 1D tensors.
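As a side note, here is a small hedged sketch (the two-layer network below is made up purely for illustration) of what getParameters gives you: params and gradParams are flat views that share storage with the individual module weights, so writing into params immediately changes the model:

require 'nn'

local net = nn.Sequential()
net:add(nn.Linear(10, 5))
net:add(nn.Linear(5, 2))

-- one flat 1D view over all weights and biases, and one over all gradients
local params, gradParams = net:getParameters()
print(params:nElement())        -- 10*5 + 5 + 5*2 + 2 = 67

-- the storage is shared: zeroing params zeroes the module weights too
params:zero()
print(net:get(1).weight:sum())  -- prints 0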

With this in place, the program from the beginning can be rewritten as:

-- given model, criterion, input, outReal, optimState
local params, gradParams = model:getParameters()

local function feval()
    return criterion.output, gradParams
end

for ... do   -- loop over training iterations
    model:training()
    model:zeroGradParameters()
    outPredict = model:forward(input)
    err = criterion:forward(outPredict, outReal)
    grad_criterion = criterion:backward(outPredict, outReal)
    model:backward(input, grad_criterion)
    optim.sgd(feval, params, optimState)
end
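Note that feval here simply returns the loss and gradients that were already computed by the explicit forward/backward calls above it, so switching optimizers only means changing the optim call and its config table. For example, a sketch with optim.adam in place of optim.sgd (the config values below are illustrative, not from the original post):

local adamState = {
    learningRate = 1e-3,   -- illustrative values only
    beta1 = 0.9,
    beta2 = 0.999,
}
-- same feval as above; only the update rule changes
optim.adam(feval, params, adamState)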

End of the 170301 update.

=========================================================

3. A caveat when using model:backward

170405 update:

Note that model:backward must always be paired with a matching model:forward.

In https://github.com/torch/nn/blob/master/doc/module.md, the documentation for [gradInput] backward(input, gradOutput) states:

In general this method makes the assumption forward(input) has been called before, with the same input. This is necessary for optimization reasons. If you do not respect this rule, backward() will compute incorrect gradients.

Presumably this is because backward may reuse intermediate values cached during forward, so forward must be run (with the same input) right before backward; otherwise the cached values no longer match the backward call and the gradients come out wrong.
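A minimal hedged sketch of this failure mode (the module and inputs below are made up for illustration): after a second forward with different data, a backward against the first input silently uses the later forward's cached activations.

require 'nn'

local net = nn.Sequential()
net:add(nn.Linear(4, 3))
net:add(nn.Tanh())           -- Tanh's backward reuses the output cached by the last forward

local inputA  = torch.randn(4)
local inputB  = torch.randn(4)
local gradOut = torch.randn(3)

net:forward(inputA)
net:forward(inputB)          -- overwrites the activations cached for inputA

-- wrong: gradients are computed from inputB's cached activations
local badGrad = net:backward(inputA, gradOut):clone()

-- right: re-run forward(inputA) immediately before backward
net:forward(inputA)
local goodGrad = net:backward(inputA, gradOut):clone()

print((badGrad - goodGrad):abs():max())   -- generally non-zero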

In my earlier program, what was cached after the forward passes was only the intermediate state of the last forward, so the backward results were always wrong (see the commented-out line in method5).

The only (rather clumsy) workaround was to run forward once more right before each backward; this guarantees correct results at the cost of one extra forward pass. See method5 in the code below.

Notes: method1 is the ordinary batch approach, but it needs a lot of GPU memory. You can instead use something like the iter_size mechanism in caffe, as in method2 (not exactly the same as caffe's iter_size). If you want to use even more samples, and in particular feed the criterion as many samples as possible at once, both of the first two methods run into problems; method3 is meant to handle that case (but method3 is in fact broken: the loss converges very slowly). method4 refines and tests method3 further: if the two marked lines in method4 are commented out it converges normally, but if they are left uncommented convergence breaks, much like method3. method5 is the final fix and converges properly. To verify that forward and backward really must be paired, uncomment the commented line in method5 and comment out the line above it; convergence then becomes very slow again (similar to method3 and method4). The convergence of each method over the first 10 epochs is listed after the code.

The program is as follows:

require 'torch'
require 'nn'
require 'optim'
require 'cunn'
require 'cutorch'

local mnist = require 'mnist'

local fullset = mnist.traindataset()
local testset = mnist.testdataset()

-- NOTE: the MNIST train/validation split and the batch-size constants below
-- are assumed values (lost from the original post)
local trainset = {
    size = 50000,
    data = fullset.data[{{1, 50000}}]:double(),
    label = fullset.label[{{1, 50000}}]
}
trainset.data = trainset.data - trainset.data:mean()
trainset.data = trainset.data:cuda()
trainset.label = trainset.label:cuda()

local validationset = {
    size = 10000,
    data = fullset.data[{{50001, 60000}}]:double(),
    label = fullset.label[{{50001, 60000}}]
}
validationset.data = validationset.data - validationset.data:mean()
validationset.data = validationset.data:cuda()
validationset.label = validationset.label:cuda()

local model = nn.Sequential()
model:add(nn.Reshape(1, 28, 28))
model:add(nn.MulConstant(1/256.0*3.2))
model:add(nn.SpatialConvolutionMM(1, 20, 5, 5, 1, 1, 0, 0))
model:add(nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0))
model:add(nn.SpatialConvolutionMM(20, 50, 5, 5, 1, 1, 0, 0))
model:add(nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0))
model:add(nn.Reshape(4*4*50))
model:add(nn.Linear(4*4*50, 500))
model:add(nn.ReLU())
model:add(nn.Linear(500, 10))
model:add(nn.LogSoftMax())

model = require('weight-init')(model, 'xavier')
model = model:cuda()

x, dl_dx = model:getParameters()

local criterion = nn.ClassNLLCriterion():cuda()

local sgd_params = {
    learningRate = 1e-2,
    learningRateDecay = 1e-4,
    weightDecay = 1e-3,
    momentum = 1e-4
}

local training = function(batchSize)
    local current_loss = 0
    local count = 0
    local shuffle = torch.randperm(trainset.size)
    batchSize = batchSize or 200        -- assumed default (must be a multiple of miniBatchSize)
    for t = 0, trainset.size - 1, batchSize do
        -- setup inputs and targets for batch iteration
        local size = math.min(t + batchSize, trainset.size) - t
        local inputs = torch.Tensor(size, 28, 28):cuda()
        local targets = torch.Tensor(size):cuda()
        for i = 1, size do
            inputs[i] = trainset.data[shuffle[i+t]]
            targets[i] = trainset.label[shuffle[i+t]] + 1
        end

        local feval = function(x_new)
            local miniBatchSize = 20    -- assumed value
            if x ~= x_new then x:copy(x_new) end
            -- reset data
            dl_dx:zero()

            --[[ ------------------ method 1 original batch
            local outputs = model:forward(inputs)
            local loss = criterion:forward(outputs, targets)
            local gradInput = criterion:backward(outputs, targets)
            model:backward(inputs, gradInput)
            --]]

            --[[ ------------------ method 2 iter-size with batch
            local loss = 0
            for idx = 1, batchSize, miniBatchSize do
                local outputs = model:forward(inputs[{{idx, idx + miniBatchSize - 1}}])
                loss = loss + criterion:forward(outputs, targets[{{idx, idx + miniBatchSize - 1}}])
                local gradInput = criterion:backward(outputs, targets[{{idx, idx + miniBatchSize - 1}}])
                model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput)
            end
            dl_dx:mul(1.0 * miniBatchSize / batchSize)
            loss = loss * miniBatchSize / batchSize
            --]]

            --[[ ------------------ method 3 mini-batch in batch
            local outputs = torch.Tensor(batchSize, 10):zero():cuda()
            for idx = 1, batchSize, miniBatchSize do
                outputs[{{idx, idx + miniBatchSize - 1}}]:copy(model:forward(inputs[{{idx, idx + miniBatchSize - 1}}]))
            end
            local loss = 0
            for idx = 1, batchSize, miniBatchSize do
                loss = loss + criterion:forward(outputs[{{idx, idx + miniBatchSize - 1}}],
                    targets[{{idx, idx + miniBatchSize - 1}}])
            end
            local gradInput = torch.Tensor(batchSize, 10):zero():cuda()
            for idx = 1, batchSize, miniBatchSize do
                gradInput[{{idx, idx + miniBatchSize - 1}}]:copy(criterion:backward(
                    outputs[{{idx, idx + miniBatchSize - 1}}], targets[{{idx, idx + miniBatchSize - 1}}]))
            end
            for idx = 1, batchSize, miniBatchSize do
                model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput[{{idx, idx + miniBatchSize - 1}}])
            end
            dl_dx:mul(1.0 * miniBatchSize / batchSize)
            loss = loss * miniBatchSize / batchSize
            --]]

            --[[ ------------------ method 4 mini-batch in batch
            local outputs = torch.Tensor(batchSize, 10):zero():cuda()
            local loss = 0
            local gradInput = torch.Tensor(batchSize, 10):zero():cuda()
            for idx = 1, batchSize, miniBatchSize do
                outputs[{{idx, idx + miniBatchSize - 1}}]:copy(model:forward(inputs[{{idx, idx + miniBatchSize - 1}}]))
                loss = loss + criterion:forward(outputs[{{idx, idx + miniBatchSize - 1}}],
                    targets[{{idx, idx + miniBatchSize - 1}}])
                gradInput[{{idx, idx + miniBatchSize - 1}}]:copy(criterion:backward(
                    outputs[{{idx, idx + miniBatchSize - 1}}], targets[{{idx, idx + miniBatchSize - 1}}]))
            -- end
            -- for idx = 1, batchSize, miniBatchSize do
                model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput[{{idx, idx + miniBatchSize - 1}}])
            end
            dl_dx:mul(1.0 * miniBatchSize / batchSize)
            loss = loss * miniBatchSize / batchSize
            --]]

            ---[[ ------------------ method 5 mini-batch in batch
            local loss = 0
            local gradInput = torch.Tensor(batchSize, 10):zero():cuda()
            for idx = 1, batchSize, miniBatchSize do
                local outputs = model:forward(inputs[{{idx, idx + miniBatchSize - 1}}])
                loss = loss + criterion:forward(outputs, targets[{{idx, idx + miniBatchSize - 1}}])
                gradInput[{{idx, idx + miniBatchSize - 1}}]:copy(criterion:backward(outputs, targets[{{idx, idx + miniBatchSize - 1}}]))
            end
            for idx = 1, batchSize, miniBatchSize do
                model:forward(inputs[{{idx, idx + miniBatchSize - 1}}])
                --model:forward(inputs[{{batchSize - miniBatchSize + 1, batchSize}}])
                model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput[{{idx, idx + miniBatchSize - 1}}])
            end
            dl_dx:mul(1.0 * miniBatchSize / batchSize)
            loss = loss * miniBatchSize / batchSize
            --]]

            return loss, dl_dx
        end

        _, fs = optim.sgd(feval, x, sgd_params)

        count = count + 1
        current_loss = current_loss + fs[1]
    end

    -- normalize loss
    return current_loss / count
end

local eval = function(dataset, batchSize)
    local count = 0
    batchSize = batchSize or 200        -- assumed default
    for i = 1, dataset.size, batchSize do
        local size = math.min(i + batchSize - 1, dataset.size) - i
        local inputs = dataset.data[{{i, i+size-1}}]:cuda()
        local targets = dataset.label[{{i, i+size-1}}]
        local outputs = model:forward(inputs)
        local _, indices = torch.max(outputs, 2)
        indices:add(-1)
        indices = indices:cuda()
        local guessed_right = indices:eq(targets):sum()
        count = count + guessed_right
    end
    return count / dataset.size
end

local max_iters = 10                    -- the post reports the first 10 epochs
local last_accuracy = 0
local decreasing = 0
local threshold = 1                     -- how many decreasing epochs we allow
for i = 1, max_iters do
    -- timer = torch.Timer()
    model:training()
    local loss = training()
    model:evaluate()
    local accuracy = eval(validationset)
    print(string.format('Epoch: %d Current loss: %4f; validation set accu: %4f', i, loss, accuracy))
    if accuracy < last_accuracy then
        if decreasing > threshold then break end
        decreasing = decreasing + 1
    else
        decreasing = 0
    end
    last_accuracy = accuracy
    --print(' Time elapsed: ' .. i .. 'iter: ' .. timer:time().real .. ' seconds')
end

testset.data = testset.data:double()
eval(testset)

weight-init.lua

--
-- Different weight initialization methods
--
-- > model = require('weight-init')(model, 'heuristic')
--
require("nn")

-- "Efficient backprop"
-- Yann Lecun, 1998
local function w_init_heuristic(fan_in, fan_out)
    return math.sqrt(1/(3*fan_in))
end

-- "Understanding the difficulty of training deep feedforward neural networks"
-- Xavier Glorot, 2010
local function w_init_xavier(fan_in, fan_out)
    return math.sqrt(2/(fan_in + fan_out))
end

-- "Understanding the difficulty of training deep feedforward neural networks"
-- Xavier Glorot, 2010
local function w_init_xavier_caffe(fan_in, fan_out)
    return math.sqrt(1/fan_in)
end

-- "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification"
-- Kaiming He, 2015
local function w_init_kaiming(fan_in, fan_out)
    return math.sqrt(4/(fan_in + fan_out))
end

local function w_init(net, arg)
    -- choose initialization method
    local method = nil
    if     arg == 'heuristic'    then method = w_init_heuristic
    elseif arg == 'xavier'       then method = w_init_xavier
    elseif arg == 'xavier_caffe' then method = w_init_xavier_caffe
    elseif arg == 'kaiming'      then method = w_init_kaiming
    else
        assert(false)
    end

    -- loop over all convolutional modules
    for i = 1, #net.modules do
        local m = net.modules[i]
        if m.__typename == 'nn.SpatialConvolution' then
            m:reset(method(m.nInputPlane*m.kH*m.kW, m.nOutputPlane*m.kH*m.kW))
        elseif m.__typename == 'nn.SpatialConvolutionMM' then
            m:reset(method(m.nInputPlane*m.kH*m.kW, m.nOutputPlane*m.kH*m.kW))
        elseif m.__typename == 'cudnn.SpatialConvolution' then
            m:reset(method(m.nInputPlane*m.kH*m.kW, m.nOutputPlane*m.kH*m.kW))
        elseif m.__typename == 'nn.LateralConvolution' then
            m:reset(method(m.nInputPlane*1*1, m.nOutputPlane*1*1))
        elseif m.__typename == 'nn.VerticalConvolution' then
            m:reset(method(1*m.kH*m.kW, 1*m.kH*m.kW))
        elseif m.__typename == 'nn.HorizontalConvolution' then
            m:reset(method(1*m.kH*m.kW, 1*m.kH*m.kW))
        elseif m.__typename == 'nn.Linear' then
            m:reset(method(m.weight:size(2), m.weight:size(1)))
        elseif m.__typename == 'nn.TemporalConvolution' then
            m:reset(method(m.weight:size(2), m.weight:size(1)))
        end

        if m.bias then
            m.bias:zero()
        end
    end
    return net
end

return w_init

Method 1

Epoch: 1 Current loss: 0.616950; validation set accu: 0.920900
Epoch: 2 Current loss: 0.228665; validation set accu: 0.942400
Epoch: 3 Current loss: 0.168047; validation set accu: 0.957900
Epoch: 4 Current loss: 0.134796; validation set accu: 0.961800
Epoch: 5 Current loss: 0.113071; validation set accu: 0.966200
Epoch: 6 Current loss: 0.098782; validation set accu: 0.968800
Epoch: 7 Current loss: 0.088252; validation set accu: 0.970000
Epoch: 8 Current loss: 0.080225; validation set accu: 0.971200
Epoch: 9 Current loss: 0.073702; validation set accu: 0.972200
Epoch: 10 Current loss: 0.068171; validation set accu: 0.972400

Method 2

Epoch: 1 Current loss: 0.624633; validation set accu: 0.922200
Epoch: 2 Current loss: 0.238459; validation set accu: 0.945200
Epoch: 3 Current loss: 0.174089; validation set accu: 0.959000
Epoch: 4 Current loss: 0.140234; validation set accu: 0.963800
Epoch: 5 Current loss: 0.116498; validation set accu: 0.968000
Epoch: 6 Current loss: 0.101376; validation set accu: 0.968800
Epoch: 7 Current loss: 0.089484; validation set accu: 0.972600
Epoch: 8 Current loss: 0.080812; validation set accu: 0.973000
Epoch: 9 Current loss: 0.073929; validation set accu: 0.975100
Epoch: 10 Current loss: 0.068330; validation set accu: 0.975400

Method 3

Epoch: 1 Current loss: 2.202240; validation set accu: 0.548500
Epoch: 2 Current loss: 2.049710; validation set accu: 0.669300
Epoch: 3 Current loss: 1.993560; validation set accu: 0.728900
Epoch: 4 Current loss: 1.959818; validation set accu: 0.774500
Epoch: 5 Current loss: 1.945992; validation set accu: 0.757600
Epoch: 6 Current loss: 1.930599; validation set accu: 0.809600
Epoch: 7 Current loss: 1.911803; validation set accu: 0.837200
Epoch: 8 Current loss: 1.904754; validation set accu: 0.842100
Epoch: 9 Current loss: 1.903705; validation set accu: 0.846400
Epoch: 10 Current loss: 1.903911; validation set accu: 0.848100

Method 4

Epoch: 1 Current loss: 0.624240; validation set accu: 0.924900
Epoch: 2 Current loss: 0.213469; validation set accu: 0.948500
Epoch: 3 Current loss: 0.156797; validation set accu: 0.959800
Epoch: 4 Current loss: 0.126438; validation set accu: 0.963900
Epoch: 5 Current loss: 0.106664; validation set accu: 0.965900
Epoch: 6 Current loss: 0.094166; validation set accu: 0.967200
Epoch: 7 Current loss: 0.084848; validation set accu: 0.971200
Epoch: 8 Current loss: 0.077244; validation set accu: 0.971800
Epoch: 9 Current loss: 0.071417; validation set accu: 0.973300
Epoch: 10 Current loss: 0.065737; validation set accu: 0.971600

Method 4 (with the two marked lines uncommented)

Epoch: 1 Current loss: 2.178319; validation set accu: 0.542200
Epoch: 2 Current loss: 2.031493; validation set accu: 0.648700
Epoch: 3 Current loss: 1.982282; validation set accu: 0.703700
Epoch: 4 Current loss: 1.956709; validation set accu: 0.762700
Epoch: 5 Current loss: 1.927590; validation set accu: 0.808100
Epoch: 6 Current loss: 1.924535; validation set accu: 0.817200
Epoch: 7 Current loss: 1.911364; validation set accu: 0.820100
Epoch: 8 Current loss: 1.898206; validation set accu: 0.855400
Epoch: 9 Current loss: 1.885394; validation set accu: 0.836500
Epoch: 10 Current loss: 1.880787; validation set accu: 0.870200

Method 5

Epoch: 1 Current loss: 0.619814; validation set accu: 0.924300
Epoch: 2 Current loss: 0.232870; validation set accu: 0.948800
Epoch: 3 Current loss: 0.172606; validation set accu: 0.954900
Epoch: 4 Current loss: 0.137763; validation set accu: 0.961800
Epoch: 5 Current loss: 0.116268; validation set accu: 0.967700
Epoch: 6 Current loss: 0.101985; validation set accu: 0.968800
Epoch: 7 Current loss: 0.091154; validation set accu: 0.970900
Epoch: 8 Current loss: 0.083219; validation set accu: 0.972700
Epoch: 9 Current loss: 0.074921; validation set accu: 0.972800
Epoch: 10 Current loss: 0.070208; validation set accu: 0.972800

Method 5 (with the commented line uncommented and the line above it commented out)

Epoch: 1 Current loss: 2.161032; validation set accu: 0.497500
Epoch: 2 Current loss: 2.027255; validation set accu: 0.690900
Epoch: 3 Current loss: 1.972939; validation set accu: 0.767600
Epoch: 4 Current loss: 1.940982; validation set accu: 0.766000
Epoch: 5 Current loss: 1.933135; validation set accu: 0.812800
Epoch: 6 Current loss: 1.913039; validation set accu: 0.799300
Epoch: 7 Current loss: 1.896871; validation set accu: 0.848800
Epoch: 8 Current loss: 1.899655; validation set accu: 0.854400
Epoch: 9 Current loss: 1.889465; validation set accu: 0.845700
Epoch: 10 Current loss: 1.878703; validation set accu: 0.846400

End of the 170405 update.

=========================================================
