Exploring the teaching of deep learning in neural networks

2023-10-18 04:20


Translation: Exploring the Teaching of Deep Learning in Neural Networks — Li Ruifan, Wang Xiaojie, Zhong Yixin
Source: Chaoxing Journal, Issue 19, October 10, 2014
Column: Computer Education
Article number: 1672-5913(2014)19-0077-03
CLC number: G642

Original text: (scanned images of the original Chinese article omitted)


Translation:

Exploring the teaching of deep learning in neural networks
Li Ruifan 1,2, Wang Xiaojie 1,2, Zhong Yixin 1

  1. School of Computer Science and Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China;
  2. Information Network Engineering Research Center, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China

Abstract:
  Deep learning is the latest breakthrough in the field of intelligent science and technology. It is urgent to introduce the basic concepts, models, and methods of deep learning into intelligent science and technology courses. This article discusses how to carry out deep learning teaching effectively in undergraduate and graduate courses, bringing the latest research results in the field of intelligence to highly innovative undergraduate and graduate students, so that they can reach the forefront of the discipline early, enhance their interest in intelligent science and technology, and stimulate their spirit of innovation.

Key Words: intelligent science and technology; deep learning; teaching advice

0 Preface
  The main specialized courses in intelligent science and technology include machine intelligence, pattern analysis, machine learning, data mining, and others. Although the teaching materials for these specialized courses have been updated very rapidly, we still marvel at how fast the discipline itself develops. This constantly challenges teachers working on the front line of intelligent science and technology teaching.

  Deep learning in neural networks is one of the latest and hottest advances in intelligent science and technology; its starting point is to construct neural networks that go beyond the typical two-layer structure. In 2006, Professor Geoffrey Hinton of the University of Toronto, Canada, a master of neural networks, published a milestone deep learning paper in the US journal Science entitled "Reducing the Dimensionality of Data with Neural Networks". Since that paper, deep learning has achieved one breakthrough after another, and these developments are bound to affect our generation of teachers engaged in intelligent science and technology teaching and research. We should seize the opportunity, meet the challenges, and advance new developments in this area of intelligence.

  The School of Computer Science and Technology of Beijing University of Posts and Telecommunications offers a bachelor's degree in intelligent science and technology, as well as corresponding master's and doctoral degrees, and undertakes the teaching of intelligent courses for both undergraduate and graduate students. In 2013, the authors proposed in the literature that the latest achievements of deep learning be introduced into the specialized courses of intelligent science and technology, including the necessity and feasibility of doing so and preliminary implementation recommendations.

1 Deep learning background

  The basic starting point of deep learning is to construct and train a multilayer neural network, an idea with deep historical roots. In the early 1980s, with multilayer perceptrons and the backpropagation algorithm as focal points, researchers saw the great potential of neural networks. Inspired by findings from biological neural networks and cognitive neuroscience, they inevitably tried to construct neural networks with more than two layers, but most of these attempts ended in failure, owing to both historical and cognitive limitations.
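The multilayer perceptron and backpropagation mentioned above can be illustrated with a minimal sketch. The following is a hypothetical NumPy example (classroom material, not from the original article) that trains a small one-hidden-layer network on XOR, the classic task a single-layer perceptron cannot solve:

```python
import numpy as np

# XOR: linearly inseparable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output
b2 = np.zeros(1)

lr = 2.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy loss with a sigmoid output:
    # the output-layer error simplifies to out - y)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Full-batch gradient descent updates
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

pred = np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))
```

With a fixed seed the tiny network learns XOR; removing the hidden layer makes the task unsolvable, which is exactly the representational limit discussed above.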
  At that time, computer speed and hardware storage were limited, so it was very difficult to construct large-scale neural networks and train them effectively. At the same time, there were biased understandings of the difficulties of the backpropagation algorithm and of training multilayer neural networks. It was not until 2006 that Professor Hinton and his doctoral student Salakhutdinov proposed an effective method for training multilayer neural networks: layer-wise unsupervised pre-training with restricted Boltzmann machines, followed by fine-tuning with the traditional backpropagation algorithm. This became the fuse that ignited the study of deep learning and quickly led to a systematic body of research.
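The restricted Boltzmann machine training at the heart of Hinton and Salakhutdinov's pre-training scheme can be sketched roughly as follows. This is a hypothetical, much-simplified NumPy illustration of one-step contrastive divergence (CD-1) on toy binary data, not the original paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary data: repeated copies of two 6-bit prototypes.
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = np.repeat(protos, 50, axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid = 6, 3
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
a = np.zeros(n_vis)  # visible biases
b = np.zeros(n_hid)  # hidden biases

def recon_error(v):
    # Mean-field reconstruction error: v -> hidden probs -> visible probs.
    h = sigmoid(v @ W + b)
    return np.mean((v - sigmoid(h @ W.T + a)) ** 2)

err_before = recon_error(data)
lr = 0.1
for _ in range(500):
    v0 = data
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step (the CD-1 reconstruction).
    pv1 = sigmoid(h0 @ W.T + a)
    ph1 = sigmoid(pv1 @ W + b)
    # Contrastive divergence parameter updates.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

err_after = recon_error(data)
```

In the full pre-training recipe, one such RBM is trained per layer, each on the hidden activities of the layer below, before backpropagation fine-tunes the stacked network as a whole.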

  From skepticism to acceptance is an important characteristic of the history of deep learning. After Professor Hinton introduced deep learning in 2006, it was challenged by Professor Jitendra Malik, a computer vision guru from the University of California, Berkeley. Although deep learning had broken the test record on MNIST, the handwritten digit recognition task, nothing was known about its performance on large-scale visual recognition problems. In 2012, Professor Hinton's team took up the challenge and entered the ImageNet ILSVRC large-scale image recognition evaluation organized by Professor Fei-Fei Li of Stanford University and others. The task consists of 1.2 million high-resolution pictures in 1,000 categories. Professor Hinton's team used a seven-layer convolutional neural network to make a surprising breakthrough, reducing the recognition error rate from 26.2% to 15.3%. There is no doubt that, from that moment on, the performance of deep learning went down in history. The skeptics had to face the fact that deep learning is truly a sharp sword.

  In academia, the study of deep learning has spread from neural networks to many fields: the International Conference on Machine Learning (ICML), the Conference on Neural Information Processing Systems (NIPS), the International Conference on Computer Vision (ICCV), the International Conference on Acoustics, Speech and Signal Processing (ICASSP), the Annual Meeting of the Association for Computational Linguistics (ACL), the Conference on Computer Vision and Pattern Recognition (CVPR), the ACM Multimedia conference (MM), and others have all featured deep learning topics and workshops. At home, in June 2013, Programmer magazine interviewed three Chinese experts in machine learning: Zhou Zhihua, a professor at Nanjing University; Li Hang, chief researcher at the Noah's Ark Laboratory of Huawei Technologies Co., Ltd.; and Zhu Jun, an associate researcher at Tsinghua University. They discussed their views on deep learning, unanimously affirmed its contribution to machine learning, and expressed long-term expectations for it. At the ICML conference held in Beijing in 2014, in addition to Hinton, another leader in deep learning, Yoshua Bengio, attended, along with several researchers from other fields.

  Industry is very interested in the application of deep learning. In November 2012, Microsoft publicly demonstrated a fully automated simultaneous interpretation system in Tianjin, whose key supporting technology is deep learning. In January 2013, Li Yanhong (Robin Li), chief executive of Baidu, announced the establishment of the Institute of Deep Learning. In March 2013, Google acquired a company founded by Geoffrey Hinton, a founder of deep learning. In December 2013, Facebook set up an artificial intelligence laboratory and appointed Professor Yann LeCun, a leader in deep learning at New York University, as its director. In January 2014, Google acquired the deep learning start-up DeepMind Technologies for 400 million dollars. In May 2014, Baidu announced the appointment of Andrew Ng, the "father of the Google Brain", as chief scientist of Baidu Research. Judging from the interest of academia and industry, deep learning has become a hot spot in machine learning, pattern recognition, and the field of intelligence as a whole.

2 Teaching suggestions

Teachers should take full account of students' knowledge background and learning characteristics, and choose appropriate teaching content according to their own specialties and interests. In addition, teachers must distinguish between undergraduate and graduate students when teaching deep learning in neural networks in order to achieve the desired results.

2.1 For Undergraduates

  For undergraduates, the teaching goal of deep learning in neural networks is to help students understand the basic starting point of deep learning, grasp its basic content and initial applications, understand how deep learning became a breakthrough in intelligent science and technology, and stimulate their interest in research on deep learning in neural networks. In view of these objectives, we suggest that there is no need for a separate course on deep learning in neural networks; instead, several teaching modules can be added to the intelligence courses for senior undergraduates. As a rule, the hours of specialized courses in the senior year are limited (usually 32 to 36 hours), and we suggest setting aside 4 to 6 hours for deep learning modules.

  Based on the above considerations, the teaching content mainly consists of two parts: the multilayer perceptron with the classic backpropagation algorithm, and the autoencoder with unsupervised feature learning. In the first unit, we can explain the limitations of the single-neuron model, its learning algorithm, and its representational capacity, and then introduce the multilayer perceptron and the classic backpropagation algorithm. In the second unit, we introduce the concept of unsupervised feature learning and the autoencoder model with its learning algorithm, letting students understand the relationship between the autoencoder and principal component analysis; the autoencoder is then used to initialize the parameters of a deep neural network before the weights are fine-tuned as a whole, helping students understand the relationship between deep learning and representation learning. In addition, if time permits, the teacher can arrange a third unit on the restricted Boltzmann machine, a stochastic model, and its stochastic learning algorithm, in particular the contrastive divergence algorithm, a stochastic approximation, and the role of restricted Boltzmann machines in training deep neural networks. Finally, as a practical application, teachers can have students apply these deep neural network models to handwritten digit recognition, compare them against a multilayer perceptron trained by backpropagation, and ask students to write the corresponding programs and experimental reports.
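The relationship between the autoencoder and principal component analysis mentioned for unit 2 can be demonstrated concretely: a linear autoencoder trained to minimize reconstruction error learns the same subspace as PCA. A minimal NumPy sketch (hypothetical teaching material, not from the article):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data lying near a 2-dimensional subspace of R^6.
Z = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 6))
X = Z @ A + 0.05 * rng.normal(size=(200, 6))
X -= X.mean(axis=0)  # center the data, as PCA does

# PCA baseline: reconstruction with the top-2 principal components.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
V2 = Vt[:2].T                       # (6, 2) top principal directions
pca_err = np.mean((X - X @ V2 @ V2.T) ** 2)

# Linear autoencoder 6 -> 2 -> 6, trained by full-batch gradient descent.
W_enc = rng.normal(scale=0.1, size=(6, 2))
W_dec = rng.normal(scale=0.1, size=(2, 6))
lr = 0.02
for _ in range(10000):
    H = X @ W_enc          # codes
    E = H @ W_dec - X      # reconstruction error
    W_dec -= lr * H.T @ E / len(X)
    W_enc -= lr * X.T @ (E @ W_dec.T) / len(X)

ae_err = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

After training, the autoencoder's reconstruction error approaches the PCA optimum, which is the concrete sense in which a linear autoencoder "is" PCA; nonlinear activations and stacking are what take the model beyond it.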

  In general, students are required to master the basic content of deep learning in neural networks through the above material, and further to think about questions such as: how to understand the necessity of deep neural networks from the perspective of biological neural networks and human cognition; how to construct and train a deep network and how to use deep structures for vision; and how deep learning relates to earlier machine learning methods such as manifold learning, probabilistic graphical models, and energy-based models, as well as to other disciplines. Guided by these questions, teachers can lead students to think and consult the literature, thus stimulating their interest in deep learning.

2.2 For Postgraduates

  In postgraduate teaching, we must consider some additional issues, such as whether to offer a more independent neural network course or a special topics course. Many students from non-computer backgrounds have not even been exposed to intelligent science and technology. Graduate students, especially doctoral students, are making the transition from knowledge learning to knowledge creation. Therefore, the goal of our program is to help students understand the basic starting point of deep learning, master its main content and its typical applications in one field, stimulate a strong interest in research on deep learning in neural networks, and put forward research proposals in areas of interest to them.

  Based on the above objectives, and in contrast to the teaching arrangements for undergraduates, we recommend a postgraduate course designed around the characteristics of postgraduate education (for example, 36 hours). The main content consists of three parts: basics of neural networks and machine learning, foundations of deep neural networks, and a reading list of deep neural network papers. The basics of neural networks and machine learning (2 hours) mainly enable students to transition smoothly from a zero starting point into the course; teachers explain the main objectives of machine learning, linear classifiers, principal component analysis and linear discriminant analysis, the basic units of neural networks, and so on. The second part covers the content of the undergraduate teaching arrangement (6 hours). The third part is the students' paper-reading section, which we further divide into four units whose themes include the restricted Boltzmann machine and its extensions, the autoencoder and its extensions, deep structured models, and typical applications in computer vision, speech, and language processing. Owing to differences in research and teaching interests, we do not give a detailed paper list for each unit, leaving more room for the teacher's discretion.
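For the prerequisites unit, principal component analysis makes a good warm-up exercise, since it reappears later in the autoencoder discussion. A minimal sketch of PCA via the singular value decomposition (a hypothetical classroom example, not from the article):

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 samples in R^4 with most variance along the first axis.
X = rng.normal(size=(100, 4)) * np.array([3.0, 1.0, 0.5, 0.1])
Xc = X - X.mean(axis=0)  # PCA requires centered data

# SVD of the centered data matrix: rows of Vt are the principal directions,
# already sorted by decreasing singular value.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_var = S ** 2 / (len(X) - 1)  # variance along each component

# Project onto the first two principal components.
scores = Xc @ Vt[:2].T                 # (100, 2) low-dimensional codes
```

A useful classroom exercise is to check that the principal directions are orthonormal and that the explained variances are sorted in decreasing order, then compare the two-component reconstruction against a linear autoencoder.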

  Beyond the lectures, we also emphasize that graduate students apply what they have learned, in two respects. The first is an experimental project covering the basic models of deep learning: experiments on the autoencoder and the restricted Boltzmann machine, including their models and training algorithms, and experiments on deep neural network models and their training. More important is how to use the deep learning methods they have learned to solve real problems. Finally, students are formed into groups of two or three and given a period of time to pose their own questions; by communicating with the teachers, each group determines a research topic, writes a research proposal, completes the study later in the term, writes a study report, and presents the research in groups during the final hours of the course.

  Teachers also need to pay attention to improving students' initiative in learning and research. Materials on deep learning are easy to obtain. For example, students can readily access the proceedings of the top conferences mentioned in the background section; some recent technical lectures sponsored by the China Computer Federation have covered deep learning; the home page of Professor Hinton, a founder of deep learning, is rich in information; the Coursera website offers a free neural network course taught by Professor Hinton; the online tutorials provided by Professor Andrew Ng of Stanford University are also excellent materials; and the monograph by Professor Bengio of the University of Montreal, "Learning Deep Architectures for AI", is a good entry point. These resources can fully mobilize students' initiative and greatly improve the quality of teaching.

  All in all, through classroom teaching, programming practice, and project research, students can master the main content of deep learning in neural networks; further understand the research motivation of deep learning and learn how to use deep structures to solve problems in vision, speech, and language applications; and, at a deeper level, think about how to construct and train a deep network and how these models relate to traditional machine learning models such as manifold learning, probabilistic graphical models, and energy-based models. This teaching content helps students move from simple knowledge learning toward an awareness of innovation.

3 Conclusion
  In conclusion, we have discussed how to carry out the teaching of deep learning effectively in undergraduate and postgraduate courses, including the guiding ideology, content organization, and teaching methods. We expect students to reach the forefront of this subject early, enhance their interest in intelligent science and technology, and stimulate their innovative spirit. Deep learning is still in a stage of rapid development; it is not yet mature, and much of its content remains in dispute. We hope this article plays a role in promoting the teaching of deep learning.

Postscript: My abilities are limited; if there are any errors, corrections are welcome!
