Denoising Autoencoders - DeepLearning 0.1 documentation (translation)

2024-06-05 13:58


DeepLearning 0.1 documentation (translated): Denoising Autoencoders (dA)

Original page: http://deeplearning.net/tutorial/dA.html

  • Denoising Autoencoders (dA): DeepLearning 0.1 documentation
    • Autoencoders
    • Denoising Autoencoders
    • Putting it All Together
    • Running the Code
  • DeepLearning 0.1 documentation


Note
This section assumes the reader has already read through Classifying MNIST digits using Logistic Regression and Multilayer Perceptron. Additionally, it uses the following new Theano functions and concepts: T.tanh, shared variables, basic arithmetic ops, T.grad, Random numbers, floatX. If you intend to run the code on a GPU also read GPU.
Note
The code for this section can be downloaded here: code.

The Denoising Autoencoder (dA) is an extension of the classical autoencoder; it was introduced as a building block for deep networks in [Vincent08]. We will start with a short discussion of autoencoders.

Autoencoders

See section 4.6 of [Bengio09] for an overview of autoencoders. An autoencoder takes an input x ∈ [0,1]^d and first maps it (with an encoder) to a hidden representation y ∈ [0,1]^{d'} through a deterministic mapping, e.g.:

y = s(Wx + b)

where s is a non-linearity such as the sigmoid. The latent representation y, or code, is then mapped back (with a decoder) into a reconstruction z of the same shape as x, through a similar transformation, e.g.:

z = s(W'y + b')

(Here, the prime symbol does not indicate matrix transposition.) z should be seen as a prediction of x, given the code y. The weight matrix W' of the reverse mapping may optionally be constrained to be the transpose of the forward mapping, W' = W^T; this is referred to as tied weights. The parameters of this model (namely W, b, b' and, if one does not use tied weights, also W') are optimized such that the average reconstruction error is minimized.

The reconstruction error can be measured in many ways, depending on the appropriate distributional assumptions on the input given the code. The traditional squared error L(x, z) = ||x - z||^2 can be used. If the input is interpreted as either bit vectors or vectors of bit probabilities, the cross-entropy of the reconstruction can be used:

L_H(x, z) = - sum_{k=1}^{d} [x_k log z_k + (1 - x_k) log(1 - z_k)]
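As a quick illustration of this cost (not part of the original tutorial), it can be evaluated directly with NumPy; the toy vectors below are made up:

import numpy

def reconstruction_cross_entropy(x, z, eps=1e-12):
    # x, z: arrays of shape (n_examples, d); entries of x in [0, 1],
    # entries of z in (0, 1). eps guards against log(0) in this sketch.
    z = numpy.clip(z, eps, 1 - eps)
    return -numpy.sum(x * numpy.log(z) + (1 - x) * numpy.log(1 - z), axis=1)

x = numpy.array([[0., 1., 1., 0.]])
print(reconstruction_cross_entropy(x, numpy.array([[0.01, 0.99, 0.99, 0.01]])))  # near 0
print(reconstruction_cross_entropy(x, numpy.array([[0.5, 0.5, 0.5, 0.5]])))      # much larger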

The hope is that the code y is a distributed representation that captures the coordinates along the main factors of variation in the data, similarly to the way the projection on principal components captures the main factors of variation. Indeed, if there is one linear hidden layer (the code) and the mean squared error criterion is used to train the network, then the k hidden units learn to project the input in the span of the first k principal components of the data. If the hidden layer is non-linear, the autoencoder behaves differently from PCA, with the ability to capture multi-modal aspects of the input distribution. The departure from PCA becomes even more important when we consider stacking multiple encoders (and their corresponding decoders) to build a deep autoencoder [Hinton06], which can no longer be viewed through the lens of PCA.

Because y is viewed as a lossy compression of x, it cannot be a good (small-loss) compression for all x. Optimization makes it a good compression for the training examples, and hopefully for other inputs as well, but not for arbitrary inputs. That is the sense in which an autoencoder generalizes: it gives low reconstruction error on test examples from the same distribution as the training examples, but generally high reconstruction error on samples randomly chosen from the input space.

We want to implement an autoencoder with Theano, in the form of a class, that could afterwards be used to construct a stacked autoencoder. The first step is to create shared variables for the parameters of the autoencoder: W, b and b'. (Since we are using tied weights in this tutorial, W^T will be used for W'):

# imports used by the dA class (as in the tutorial's dA.py)
import numpy

import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams


class dA(object):
    """Denoising Auto-Encoder class (dA)

    A denoising autoencoder tries to reconstruct the input from a corrupted
    version of it by projecting it first in a latent space and reprojecting
    it afterwards back in the input space. Please refer to Vincent et al., 2008
    for more details. If x is the input then equation (1) computes a partially
    destroyed version of x by means of a stochastic mapping q_D. Equation (2)
    computes the projection of the input into the latent space. Equation (3)
    computes the reconstruction of the input, while equation (4) computes the
    reconstruction error.

    .. math::

        \tilde{x} ~ q_D(\tilde{x}|x)                                     (1)

        y = s(W \tilde{x} + b)                                           (2)

        x = s(W' y  + b')                                                (3)

        L(x, z) = -sum_{k=1}^d [x_k \log z_k + (1-x_k) \log(1-z_k)]      (4)

    """

    def __init__(self, numpy_rng, theano_rng=None, input=None,
                 n_visible=784, n_hidden=500, W=None, bhid=None, bvis=None):
        """
        Initialize the dA class by specifying the number of visible units (the
        dimension d of the input), the number of hidden units (the dimension
        d' of the latent or hidden space) and the corruption level. The
        constructor also receives symbolic variables for the input, weights and
        bias. Such symbolic variables are useful when, for example, the input
        is the result of some computations, or when weights are shared between
        the dA and an MLP layer. When dealing with SdAs this always happens:
        the dA on layer 2 gets as input the output of the dA on layer 1,
        and the weights of the dA are used in the second stage of training
        to construct an MLP.

        :type numpy_rng: numpy.random.RandomState
        :param numpy_rng: number random generator used to generate weights

        :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
        :param theano_rng: Theano random generator; if None is given one is
                           generated based on a seed drawn from `rng`

        :type input: theano.tensor.TensorType
        :param input: a symbolic description of the input or None for
                      standalone dA

        :type n_visible: int
        :param n_visible: number of visible units

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type W: theano.tensor.TensorType
        :param W: Theano variable pointing to a set of weights that should be
                  shared between the dA and another architecture; if dA should
                  be standalone set this to None

        :type bhid: theano.tensor.TensorType
        :param bhid: Theano variable pointing to a set of bias values (for
                     hidden units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None

        :type bvis: theano.tensor.TensorType
        :param bvis: Theano variable pointing to a set of bias values (for
                     visible units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None
        """
        self.n_visible = n_visible
        self.n_hidden = n_hidden

        # create a Theano random generator that gives symbolic random values
        if not theano_rng:
            theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))

        # note : W' was written as `W_prime` and b' as `b_prime`
        if not W:
            # W is initialized with `initial_W` which is uniformly sampled
            # from -4*sqrt(6./(n_visible+n_hidden)) and
            # 4*sqrt(6./(n_hidden+n_visible)); the output of uniform is
            # converted using asarray to dtype theano.config.floatX so
            # that the code is runnable on GPU
            initial_W = numpy.asarray(
                numpy_rng.uniform(
                    low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    size=(n_visible, n_hidden)),
                dtype=theano.config.floatX)
            W = theano.shared(value=initial_W, name='W', borrow=True)

        if not bvis:
            bvis = theano.shared(value=numpy.zeros(n_visible,
                                                   dtype=theano.config.floatX),
                                 borrow=True)

        if not bhid:
            bhid = theano.shared(value=numpy.zeros(n_hidden,
                                                   dtype=theano.config.floatX),
                                 name='b',
                                 borrow=True)

        self.W = W
        # b corresponds to the bias of the hidden units
        self.b = bhid
        # b_prime corresponds to the bias of the visible units
        self.b_prime = bvis
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.theano_rng = theano_rng
        # if no input is given, generate a variable representing the input
        if input is None:
            # we use a matrix because we expect a minibatch of several
            # examples, each example being a row
            self.x = T.dmatrix(name='input')
        else:
            self.x = input

        self.params = [self.W, self.b, self.b_prime]

Note that we pass the symbolic input to the autoencoder as a parameter. This is so that we can concatenate layers of autoencoders to form a deep network: the symbolic output of layer k will be the symbolic input of layer k+1.
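For example (a hypothetical sketch, not code from the tutorial), two dA instances could be chained by feeding the symbolic hidden representation of the first one, computed with the get_hidden_values method defined just below, into the constructor of the second one; rng and theano_rng are assumed to be set up as in the "Putting it All Together" section, and the layer sizes are made up:

x = T.matrix('x')  # symbolic minibatch of inputs

# layer 1: encodes the 784-dimensional input into a 500-dimensional code
da1 = dA(numpy_rng=rng, theano_rng=theano_rng, input=x,
         n_visible=28 * 28, n_hidden=500)

# layer 2: its symbolic input is the symbolic output (code) of layer 1
da2 = dA(numpy_rng=rng, theano_rng=theano_rng,
         input=da1.get_hidden_values(da1.x),
         n_visible=500, n_hidden=250)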

Now we can express the computation of the latent representation and of the reconstructed signal:

    def get_hidden_values(self, input):
        """ Computes the values of the hidden layer """
        return T.nnet.sigmoid(T.dot(input, self.W) + self.b)

    def get_reconstructed_input(self, hidden):
        """Computes the reconstructed input given the values of the
        hidden layer
        """
        return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)

And using these functions we can compute the cost and the updates of one stochastic gradient descent step:

    def get_cost_updates(self, corruption_level, learning_rate):
        """ This function computes the cost and the updates for one training
        step of the dA """

        tilde_x = self.get_corrupted_input(self.x, corruption_level)
        y = self.get_hidden_values(tilde_x)
        z = self.get_reconstructed_input(y)
        # note : we sum over the size of a datapoint; if we are using
        #        minibatches, L will be a vector, with one entry per
        #        example in minibatch
        L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
        # note : L is now a vector, where each element is the
        #        cross-entropy cost of the reconstruction of the
        #        corresponding example of the minibatch. We need to
        #        compute the average of all these to get the cost of
        #        the minibatch
        cost = T.mean(L)

        # compute the gradients of the cost of the `dA` with respect
        # to its parameters
        gparams = T.grad(cost, self.params)
        # generate the list of updates
        updates = [
            (param, param - learning_rate * gparam)
            for param, gparam in zip(self.params, gparams)
        ]

        return (cost, updates)

We can now define a function that, applied iteratively, will update the parameters W, b and b_prime such that the reconstruction cost is approximately minimized:

da = dA(
    numpy_rng=rng,
    theano_rng=theano_rng,
    input=x,
    n_visible=28 * 28,
    n_hidden=500
)

cost, updates = da.get_cost_updates(
    corruption_level=0.,
    learning_rate=learning_rate
)

train_da = theano.function(
    [index],
    cost,
    updates=updates,
    givens={
        x: train_set_x[index * batch_size: (index + 1) * batch_size]
    }
)

If there were no constraint besides minimizing the reconstruction error, one might expect an autoencoder with n inputs and an encoding of dimension n (or greater) to learn the identity function, merely mapping an input to its copy. Such an autoencoder would not differentiate test examples (from the training distribution) from other input configurations.

Surprisingly, experiments reported in [Bengio07] suggest that, in practice, when trained with stochastic gradient descent, non-linear autoencoders with more hidden units than inputs (called overcomplete) yield useful representations. (Here, "useful" means that a network taking the encoding as input has low classification error.)

A simple explanation is that stochastic gradient descent with early stopping is similar to an L2 regularization of the parameters. To achieve perfect reconstruction of continuous inputs, a one-hidden-layer autoencoder with non-linear hidden units (exactly like the one in the code above) needs very small weights in the first (encoding) layer, to bring the non-linearity of the hidden units into their linear regime, and very large weights in the second (decoding) layer. With binary inputs, very large weights are also needed to completely minimize the reconstruction error. Since the implicit or explicit regularization makes it difficult to reach large-weight solutions, the optimization algorithm finds encodings that only work well for examples similar to those in the training set, which is what we want. It means that the representation is exploiting statistical regularities present in the training set, rather than merely learning to replicate the input.

There are other ways by which an autoencoder with more hidden units than inputs could be prevented from learning the identity function and made to capture something useful about the input in its hidden representation. One is the addition of a sparsity constraint (forcing many of the hidden units to be zero or near-zero); sparsity has been exploited very successfully by many [Ranzato07] [Lee08]. Another is to add randomness in the transformation from input to reconstruction. This technique is used in Restricted Boltzmann Machines (discussed later in Restricted Boltzmann Machines (RBM)), as well as in the denoising autoencoders discussed below.
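As an illustration of the sparsity idea (a hypothetical variant, not part of the tutorial's dA class), one could penalize the mean hidden activation in addition to the reconstruction cost; the method name and the sparsity_weight hyperparameter below are made up:

    def get_sparse_cost_updates(self, learning_rate, sparsity_weight=0.05):
        """Hypothetical variant of get_cost_updates, without corruption but
        with a penalty pushing hidden activations towards zero."""
        y = self.get_hidden_values(self.x)
        z = self.get_reconstructed_input(y)
        L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
        # hidden activations are sigmoid outputs in (0, 1), so penalizing
        # their mean drives many of them towards zero (a sparsity constraint)
        cost = T.mean(L) + sparsity_weight * T.mean(y)
        gparams = T.grad(cost, self.params)
        updates = [(param, param - learning_rate * gparam)
                   for param, gparam in zip(self.params, gparams)]
        return (cost, updates)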

Denoising Autoencoders

The idea behind denoising autoencoders is simple. In order to force the hidden layer to discover more robust features and prevent it from simply learning the identity, we train the autoencoder to reconstruct the input from a corrupted version of it.

The denoising autoencoder is a stochastic version of the autoencoder. Intuitively, a denoising autoencoder does two things: it tries to encode the input (preserve the information about the input), and it tries to undo the effect of a corruption process stochastically applied to the input of the autoencoder. The latter can only be done by capturing the statistical dependencies between the inputs. The denoising autoencoder can be understood from different perspectives (the manifold learning perspective, the stochastic operator perspective, the bottom-up information-theoretic perspective, the top-down generative model perspective), all of which are explained in [Vincent08]. See also section 7.2 of [Bengio09] for an overview of autoencoders.

In [Vincent08], the stochastic corruption process randomly sets some of the inputs (as many as half of them) to zero. Hence the denoising autoencoder is trying to predict the corrupted (i.e. missing) values from the uncorrupted (i.e. non-missing) values, for randomly selected subsets of missing patterns. Note how being able to predict any subset of variables from the rest is a sufficient condition for completely capturing the joint distribution between a set of variables (this is how Gibbs sampling works).
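Concretely (a made-up NumPy illustration, not the tutorial's symbolic implementation shown below), corrupting an input vector at a 30% corruption level amounts to multiplying it by a random binary mask:

import numpy

rng = numpy.random.RandomState(0)
x = numpy.array([0.9, 0.1, 0.8, 0.3, 0.7, 0.2])   # a toy input vector
corruption_level = 0.3

# each entry is kept with probability 1 - corruption_level, zeroed otherwise
mask = rng.binomial(n=1, p=1 - corruption_level, size=x.shape)
tilde_x = mask * x
print(tilde_x)   # on average, about 30% of the entries are now 0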

To convert the autoencoder class into a denoising autoencoder class, all we need to do is add a stochastic corruption step operating on the input. The input can be corrupted in many ways, but in this tutorial we will stick to the original corruption mechanism of randomly masking entries of the input by setting them to zero. The code below does just that:

from theano.tensor.shared_randomstreams import RandomStreams

    def get_corrupted_input(self, input, corruption_level):
        """This function keeps ``1-corruption_level`` entries of the inputs
        the same and zeroes out a randomly selected subset of size
        ``corruption_level``

        Note : first argument of theano.rng.binomial is the shape(size) of
               random numbers that it should produce
               second argument is the number of trials
               third argument is the probability of success of any trial

               this will produce an array of 0s and 1s where 1 has a
               probability of 1 - ``corruption_level`` and 0 with
               ``corruption_level``
        """
        return self.theano_rng.binomial(size=input.shape, n=1,
                                        p=1 - corruption_level) * input

In the stacked autoencoder class (Stacked Autoencoders), the weights of the dA class have to be shared with those of a corresponding sigmoid layer. For that reason, the constructor of the dA also accepts Theano variables pointing to the shared parameters. If those parameters are left as None, new ones will be constructed.
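For instance (a hypothetical sketch; the actual wiring is done by the SdA class in the stacked autoencoder tutorial), sigmoid_layer is assumed here to be an MLP hidden-layer object exposing shared variables W and b, and layer_input to be its symbolic input:

da_shared = dA(numpy_rng=rng,
               theano_rng=theano_rng,
               input=layer_input,
               n_visible=28 * 28,
               n_hidden=500,
               W=sigmoid_layer.W,     # reuse the sigmoid layer's weights
               bhid=sigmoid_layer.b)  # and its hidden-unit bias
# bvis is left as None, so the dA creates its own visible bias b_prime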

The final denoising autoencoder class becomes:

class dA(object):
    """Denoising Auto-Encoder class (dA)

    A denoising autoencoder tries to reconstruct the input from a corrupted
    version of it by projecting it first in a latent space and reprojecting
    it afterwards back in the input space. Please refer to Vincent et al., 2008
    for more details. If x is the input then equation (1) computes a partially
    destroyed version of x by means of a stochastic mapping q_D. Equation (2)
    computes the projection of the input into the latent space. Equation (3)
    computes the reconstruction of the input, while equation (4) computes the
    reconstruction error.

    .. math::

        \tilde{x} ~ q_D(\tilde{x}|x)                                     (1)

        y = s(W \tilde{x} + b)                                           (2)

        x = s(W' y  + b')                                                (3)

        L(x, z) = -sum_{k=1}^d [x_k \log z_k + (1-x_k) \log(1-z_k)]      (4)

    """

    def __init__(self, numpy_rng, theano_rng=None, input=None,
                 n_visible=784, n_hidden=500, W=None, bhid=None, bvis=None):
        """
        Initialize the dA class by specifying the number of visible units (the
        dimension d of the input), the number of hidden units (the dimension
        d' of the latent or hidden space) and the corruption level. The
        constructor also receives symbolic variables for the input, weights and
        bias. Such symbolic variables are useful when, for example, the input
        is the result of some computations, or when weights are shared between
        the dA and an MLP layer. When dealing with SdAs this always happens:
        the dA on layer 2 gets as input the output of the dA on layer 1,
        and the weights of the dA are used in the second stage of training
        to construct an MLP.

        :type numpy_rng: numpy.random.RandomState
        :param numpy_rng: number random generator used to generate weights

        :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
        :param theano_rng: Theano random generator; if None is given one is
                           generated based on a seed drawn from `rng`

        :type input: theano.tensor.TensorType
        :param input: a symbolic description of the input or None for
                      standalone dA

        :type n_visible: int
        :param n_visible: number of visible units

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type W: theano.tensor.TensorType
        :param W: Theano variable pointing to a set of weights that should be
                  shared between the dA and another architecture; if dA should
                  be standalone set this to None

        :type bhid: theano.tensor.TensorType
        :param bhid: Theano variable pointing to a set of bias values (for
                     hidden units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None

        :type bvis: theano.tensor.TensorType
        :param bvis: Theano variable pointing to a set of bias values (for
                     visible units) that should be shared between the dA and
                     another architecture; if dA should be standalone set this
                     to None
        """
        self.n_visible = n_visible
        self.n_hidden = n_hidden

        # create a Theano random generator that gives symbolic random values
        if not theano_rng:
            theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))

        # note : W' was written as `W_prime` and b' as `b_prime`
        if not W:
            # W is initialized with `initial_W` which is uniformly sampled
            # from -4.*sqrt(6./(n_visible+n_hidden)) and
            # 4.*sqrt(6./(n_hidden+n_visible)); the output of uniform is
            # converted using asarray to dtype theano.config.floatX so
            # that the code is runnable on GPU
            initial_W = numpy.asarray(
                numpy_rng.uniform(
                    low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    size=(n_visible, n_hidden)),
                dtype=theano.config.floatX)
            W = theano.shared(value=initial_W, name='W')

        if not bvis:
            bvis = theano.shared(value=numpy.zeros(n_visible,
                                                   dtype=theano.config.floatX),
                                 name='bvis')

        if not bhid:
            bhid = theano.shared(value=numpy.zeros(n_hidden,
                                                   dtype=theano.config.floatX),
                                 name='bhid')

        self.W = W
        # b corresponds to the bias of the hidden units
        self.b = bhid
        # b_prime corresponds to the bias of the visible units
        self.b_prime = bvis
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.theano_rng = theano_rng
        # if no input is given, generate a variable representing the input
        if input is None:
            # we use a matrix because we expect a minibatch of several
            # examples, each example being a row
            self.x = T.dmatrix(name='input')
        else:
            self.x = input

        self.params = [self.W, self.b, self.b_prime]

    def get_corrupted_input(self, input, corruption_level):
        """This function keeps ``1-corruption_level`` entries of the inputs
        the same and zeroes out a randomly selected subset of size
        ``corruption_level``

        Note : first argument of theano.rng.binomial is the shape(size) of
               random numbers that it should produce
               second argument is the number of trials
               third argument is the probability of success of any trial

               this will produce an array of 0s and 1s where 1 has a
               probability of 1 - ``corruption_level`` and 0 with
               ``corruption_level``
        """
        return self.theano_rng.binomial(size=input.shape, n=1,
                                        p=1 - corruption_level) * input

    def get_hidden_values(self, input):
        """ Computes the values of the hidden layer """
        return T.nnet.sigmoid(T.dot(input, self.W) + self.b)

    def get_reconstructed_input(self, hidden):
        """ Computes the reconstructed input given the values of the
        hidden layer """
        return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)

    def get_cost_updates(self, corruption_level, learning_rate):
        """ This function computes the cost and the updates for one training
        step of the dA """

        tilde_x = self.get_corrupted_input(self.x, corruption_level)
        y = self.get_hidden_values(tilde_x)
        z = self.get_reconstructed_input(y)
        # note : we sum over the size of a datapoint; if we are using
        #        minibatches, L will be a vector, with one entry per
        #        example in minibatch
        L = -T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
        # note : L is now a vector, where each element is the cross-entropy
        #        cost of the reconstruction of the corresponding example of
        #        the minibatch. We need to compute the average of all these
        #        to get the cost of the minibatch
        cost = T.mean(L)

        # compute the gradients of the cost of the `dA` with respect
        # to its parameters
        gparams = T.grad(cost, self.params)
        # generate the list of updates
        updates = []
        for param, gparam in zip(self.params, gparams):
            updates.append((param, param - learning_rate * gparam))

        return (cost, updates)

Putting it All Together

It is now easy to construct an instance of our dA class and train it:

# allocate symbolic variables for the data
index = T.lscalar()    # index to a [mini]batch
x = T.matrix('x')      # the data is presented as rasterized images

######################
# BUILDING THE MODEL #
######################

rng = numpy.random.RandomState(123)
theano_rng = RandomStreams(rng.randint(2 ** 30))

da = dA(numpy_rng=rng, theano_rng=theano_rng, input=x,
        n_visible=28 * 28, n_hidden=500)

cost, updates = da.get_cost_updates(corruption_level=0.2,
                                    learning_rate=learning_rate)

train_da = theano.function([index], cost, updates=updates,
                           givens={x: train_set_x[index * batch_size:
                                                  (index + 1) * batch_size]})

start_time = time.clock()

############
# TRAINING #
############

# go through training epochs
for epoch in xrange(training_epochs):
    # go through training set
    c = []
    for batch_index in xrange(n_train_batches):
        c.append(train_da(batch_index))

    print 'Training epoch %d, cost ' % epoch, numpy.mean(c)

end_time = time.clock()

training_time = (end_time - start_time)

print ('Training took %f minutes' % (training_time / 60.))

To get a feeling for what the network has learned we are going to plot the filters (defined by the weight matrix). Bear in mind, however, that this does not tell the entire story, since we neglect the biases and plot the weights only up to a multiplicative constant (the weights are converted to values between 0 and 1).

To plot the filters we will need the help of tile_raster_images (see Plotting Samples and Filters), so we encourage the reader to become familiar with it. With the additional help of the Python Image Library (PIL), the following lines of code will save the filters as an image:

image = Image.fromarray(tile_raster_images(
    X=da.W.get_value(borrow=True).T,
    img_shape=(28, 28), tile_shape=(10, 10),
    tile_spacing=(1, 1)))
image.save('filters_corruption_30.png')

Running the Code

To run the code:

python dA.py

The resulting filters when we do not use any noise: [figure: learned filters, no corruption]

The filters for 30 percent noise: [figure: learned filters, 30% corruption]


