This article covers some patterns for writing training routines and models in Theano, in the hope that it is a useful reference.
I have been working through the Theano tutorial again. I studied it for a while before the end of last year, but I had little time then and did not really understand much of it; I was mostly imitating the book's code, so errors were hard to track down, and anything I tried to write my own way promptly broke. These past few days I have studied it properly, so here is a summary.
In the softmax / logistic regression code, the tutorial does the following:
1: Write a class, initialize w and b in the class's __init__ method, and also compute the probabilities there.
2: Compute cost and error in separate methods.
3: Do the training outside the class.
Because the probabilities are computed inside __init__, at least x must be passed in when the class is constructed, and y must be passed in when computing cost and error. This is why the training method cannot simply be moved inside the class: the data is produced inside the training method, so there is no way to pass it in at construction time. (You might suggest loading the data separately, outside the class. But what actually gets passed into the compiled training functions is an index into the data, so loading the data outside is not very practical either; keeping data loading together with the training method is the most convenient arrangement.)
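To make the problem concrete, here is a condensed sketch of that tutorial-style layout (the class and method names are my own shorthand for illustration, not the tutorial's exact code):

import numpy
import theano
import theano.tensor as T

class TutorialStyleLR(object):
    def __init__(self, input, n_in, n_out):
        self.w = theano.shared(numpy.zeros((n_in, n_out),
                                           dtype=theano.config.floatX))
        self.b = theano.shared(numpy.zeros((n_out,),
                                           dtype=theano.config.floatX))
        # The probabilities are built right here, so the symbolic x
        # (`input`) must already exist before the class can be constructed.
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.w) + self.b)

    def cost(self, y):
        # y has to be passed in separately wherever the cost is needed
        return -T.mean(T.log(self.p_y_given_x[T.arange(y.shape[0]), y]))

    def error(self, y):
        prediction = T.argmax(self.p_y_given_x, axis=1)
        return T.mean(T.neq(prediction, y))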
The fix is straightforward: remove from __init__ every argument that cannot be supplied from outside the class, and instead pass those arguments into whichever method needs them. For example, pull the probability computation out into its own method and pass x into it, leaving everything else unchanged. Then, inside the training method, x and y are declared and bound to the corresponding data, so nothing goes wrong.
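In sketch form (hypothetical names again, same imports as the sketch above; the full, runnable version follows below), the change amounts to moving the probability computation out of __init__:

class NN(object):
    def __init__(self, n_in, n_out):
        # only the parameters live in __init__ now; no symbolic x required
        self.w = theano.shared(numpy.zeros((n_in, n_out),
                                           dtype=theano.config.floatX))
        self.b = theano.shared(numpy.zeros((n_out,),
                                           dtype=theano.config.floatX))

    def get_probability(self, x):
        # x is supplied at call time instead of at construction time
        return T.nnet.softmax(T.dot(x, self.w) + self.b)

    def cost(self, x, y):
        p_y_given_x = self.get_probability(x)
        return -T.mean(T.log(p_y_given_x[T.arange(y.shape[0]), y]))

Since the class no longer needs a symbolic x in order to be constructed, the training method, which declares x and y itself, can now live inside it.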
Here is my finished version, posted as a memo and as preparation for the next step, a higher-level encapsulation:
import numpy
import gzip
import cPickle
import theano
import theano.tensor as T


class NN(object):

    def __init__(self, n_in, n_out):
        # Only the parameters live here; the symbolic input x is passed
        # into the methods that need it, which is what lets train() sit
        # inside the class.
        self.w = theano.shared(numpy.zeros((n_in, n_out),
                                           dtype=theano.config.floatX))
        self.b = theano.shared(numpy.zeros((n_out,),
                                           dtype=theano.config.floatX))

    def get_probability(self, x):
        # p(y|x): softmax over the linear scores
        return T.nnet.softmax(T.dot(x, self.w) + self.b)

    def get_prediction(self, x):
        # predicted class = index of the largest probability
        return T.argmax(self.get_probability(x), axis=1)

    def cost(self, x, y):
        # negative log-likelihood of the correct classes
        p_y_given_x = self.get_probability(x)
        return -T.mean(T.log(p_y_given_x[T.arange(y.shape[0]), y]))

    def error(self, x, y):
        # mean misclassification rate
        return T.mean(T.neq(self.get_prediction(x), y))

    def load_data(self):
        f = gzip.open('mnist.pkl.gz')
        trainxy, validatexy, testxy = cPickle.load(f)
        f.close()

        def share_data(xy):
            # Put the arrays into shared variables (so they can live on
            # the GPU); labels are stored as floats and cast back to int32.
            x, y = xy
            x = theano.shared(numpy.asarray(x, theano.config.floatX))
            y = theano.shared(numpy.asarray(y, theano.config.floatX))
            return [x, T.cast(y, 'int32')]

        trainx, trainy = share_data(trainxy)
        validatex, validatey = share_data(validatexy)
        testx, testy = share_data(testxy)
        return [(trainx, trainy), (validatex, validatey), (testx, testy)]

    def train(self):
        # The symbolic variables are declared here, right next to the data
        # that will be substituted for them via givens.
        x = T.matrix('x', theano.config.floatX)
        y = T.ivector('y')
        [(trainx, trainy), (validatex, validatey),
         (testx, testy)] = self.load_data()

        gw, gb = T.grad(self.cost(x, y), [self.w, self.b])
        index = T.lscalar()
        batch_size = 600

        # One gradient step per call; givens binds x and y to the
        # minibatch selected by index.
        trainModel = theano.function(
            [index], self.cost(x, y),
            updates=[(self.w, self.w - 0.13 * gw),
                     (self.b, self.b - 0.13 * gb)],
            givens={x: trainx[index * batch_size:(index + 1) * batch_size],
                    y: trainy[index * batch_size:(index + 1) * batch_size]})
        validateModel = theano.function(
            [index], self.error(x, y),
            givens={x: validatex[index * batch_size:(index + 1) * batch_size],
                    y: validatey[index * batch_size:(index + 1) * batch_size]})
        testModel = theano.function(
            [index], self.error(x, y),
            givens={x: testx[index * batch_size:(index + 1) * batch_size],
                    y: testy[index * batch_size:(index + 1) * batch_size]})

        # Early-stopping bookkeeping: run at least `patience` minibatches,
        # and extend the patience whenever validation improves noticeably.
        best_validate_error = numpy.inf
        best_test_error = 0.
        patience = 5000
        patience_increase = 2
        train_batches = trainx.get_value().shape[0] / batch_size
        validate_batches = validatex.get_value().shape[0] / batch_size
        test_batches = testx.get_value().shape[0] / batch_size
        validate_frequency = min(patience / 2, train_batches)

        epochs = 1000
        epoch = 1
        ite = 0
        stopping = False
        while (epoch < epochs) and (not stopping):
            for i in xrange(train_batches):
                ite += 1
                this_cost = trainModel(i)
                if ite % validate_frequency == 0:
                    this_validate_error = numpy.mean(
                        [validateModel(j) for j in xrange(validate_batches)])
                    print('ite:%d/%d, cost:%f, validate:%f'
                          % (ite, epoch, this_cost, this_validate_error))
                    if this_validate_error < best_validate_error:
                        # A significant improvement buys more patience.
                        if this_validate_error < 0.995 * best_validate_error:
                            patience = max(patience, ite * patience_increase)
                        this_test_error = numpy.mean(
                            [testModel(j) for j in xrange(test_batches)])
                        best_validate_error = this_validate_error
                        best_test_error = this_test_error
                        print('ite:%d/%d, cost:%f, validate:%f, test:%f'
                              % (ite, epoch, this_cost,
                                 this_validate_error, this_test_error))
                if patience <= ite:
                    stopping = True
                    break
            epoch += 1
        print('best validate error:%f, best test error:%f'
              % (best_validate_error, best_test_error))


if __name__ == '__main__':
    nn = NN(784, 10)
    nn.train()
The training results (excerpted below) essentially match those given in the tutorial.
...
ite:5810/70, cost:0.329380, validate:0.075104
ite:5810/70, cost:0.329380, validate:0.075104, test:0.075000
ite:5893/71, cost:0.329054, validate:0.075208
ite:5976/72, cost:0.328735, validate:0.075104
ite:5976/72, cost:0.328735, validate:0.075104, test:0.075104
ite:6059/73, cost:0.328422, validate:0.075000
ite:6059/73, cost:0.328422, validate:0.075000, test:0.074896
ite:6142/74, cost:0.328116, validate:0.074792
ite:6142/74, cost:0.328116, validate:0.074792, test:0.074896
best validate error:0.074792, best test error:0.074896