A Survey and Discussion of Deep Learning (deepLearning slides)

Outline
- Conception of deep learning
- Development history
- Deep learning frameworks
- Deep neural network architectures
- Convolutional neural networks: introduction, network structure, training tricks
- Application in aesthetic image evaluation

Deep Learning (Hinton, 2006)
Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. Its main advantage is that features are extracted automatically instead of being engineered by hand. Typical application areas: computer vision, speech recognition, natural language processing.

Development History
[Timeline figure, 1940-2015; key figures: W. McCulloch and W. Pitts, Rosenblatt, Marvin Minsky, Geoffrey Hinton, Yann LeCun, Yoshua Bengio]
- 1943: MP neuron model (McCulloch and Pitts)
- 1958: single-layer perceptron (Rosenblatt)
- 1969: the XOR problem (Marvin Minsky)
- 1986: the BP algorithm (Hinton)
- 1989: CNN / LeNet (Yann LeCun)
- 1991: the gradient-vanishing problem
- 1995: SVM
- 1997: LSTM
- 2006: DBN (Hinton)
- 2011: ReLU
- 2012: Dropout, AlexNet
- 2015: Batch Normalization, Faster R-CNN, Residual Net

Deep Learning Frameworks

Deep Neural Network Architectures
- Deep Belief Networks (DBN)
- Recurrent Neural Networks (RNN)
- Generative Adversarial Networks (GANs)
- Convolutional Neural Networks (CNN)
- Long Short-Term Memory (LSTM)

DBN (Deep Belief Network, 2006)
- Hidden units and visible units; each unit is binary (0 or 1).
- Every visible unit connects to all the hidden units, and every hidden unit connects to all the visible units.
- There are no visible-visible or hidden-hidden connections.
Fig 1. RBM (restricted Boltzmann machine) structure. Fig 2. DBN (deep belief network) structure.
Idea: a DBN is composed of multiple layers of RBMs. How do we train these additional layers? With an unsupervised, greedy, layer-by-layer approach, sketched below.
Hinton G E. Deep belief networks[J]. Scholarpedia, 2009, 4(6): 5947.
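The greedy scheme trains one RBM at a time, then feeds its hidden activations upward as "data" for the next RBM. A minimal NumPy sketch of a single RBM update with contrastive divergence (CD-1); the function name `cd1_update` and all sizes are illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.1):
    """One CD-1 step for a binary RBM; v0 has shape (batch, n_visible)."""
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visible units, recompute hiddens.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Contrastive-divergence approximation of the log-likelihood gradient.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)

# Greedy layer-by-layer pre-training starts from a toy binary batch.
v = (rng.random((32, 784)) < 0.5).astype(float)
W = 0.01 * rng.standard_normal((784, 256))
b_vis, b_hid = np.zeros(784), np.zeros(256)
for _ in range(10):
    cd1_update(W, b_vis, b_hid, v)
```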

RNN (Recurrent Neural Network, 2013)
What? RNN aims to process sequence data. An RNN remembers previous information and applies it to the calculation of the current output. That is, the nodes of the hidden layer are connected: the input of the hidden layer includes not only the output of the input layer but also the hidden layer's own output from the previous time step (a sketch follows).
Applications? Machine translation, generating image descriptions, speech recognition.
How to train? BPTT (backpropagation through time).
Marhon S A, Cameron C J F, Kremer S C. Recurrent Neural Networks[M]// Handbook on Neural Information Processing. Springer Berlin Heidelberg, 2013: 29-65.
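A minimal NumPy sketch of that recurrence; the weight names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W_xh = 0.1 * rng.standard_normal((n_in, n_hid))   # input -> hidden
W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))  # hidden -> hidden (the recurrent link)
W_hy = 0.1 * rng.standard_normal((n_hid, n_out))  # hidden -> output

def rnn_forward(xs):
    """xs: a sequence of input vectors, shape (T, n_in)."""
    h = np.zeros(n_hid)
    ys = []
    for x in xs:
        # The hidden input combines the current input with the
        # previous hidden state -- this is what "remembers" the past.
        h = np.tanh(x @ W_xh + h @ W_hh)
        ys.append(h @ W_hy)
    return np.array(ys)

outputs = rnn_forward(rng.standard_normal((5, n_in)))  # 5 time steps
```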

GANs (Generative Adversarial Networks, 2014)
GANs are inspired by the zero-sum game in game theory and consist of a pair of networks: a generator network and a discriminator network. The generator network generates a sample from a random vector; the discriminator network discriminates whether a given sample is natural or counterfeit. Both networks train together, improving until they reach a point where counterfeit and real samples cannot be distinguished (a training sketch follows).
Applications: image editing, image-to-image translation, generating text, generating images from text, combination with reinforcement learning, and more.
Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]// Advances in Neural Information Processing Systems. 2014: 2672-2680.
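To illustrate the generator-discriminator game, here is a minimal PyTorch sketch (not from the slides) that trains a generator to counterfeit samples from a 1-D Gaussian; all layer sizes are illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "natural" samples
    fake = G(torch.randn(64, 8))            # counterfeit samples from random vectors
    # Discriminator step: label real samples 1, counterfeit samples 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: try to make the discriminator output 1 on fakes.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```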

Long Short-Term Memory (LSTM, 1997)

Neural Networks
From the neuron to the neural network.

Convolutional Neural Networks (CNN)
A convolutional neural network is a kind of feedforward neural network characterized by a simple structure, few training parameters, and strong adaptability. A CNN avoids complex image pre-processing (e.g. extracting hand-crafted features): we can directly input the original image.
Basic components: convolution layers, pooling layers, fully connected layers.

Convolution layer
The convolution kernel translates over a 2-dimensional plane; each element of the kernel is multiplied by the element at the corresponding position of the image, and the products are summed. By moving the kernel, we obtain a new image that consists of this sum of products at each position (see the sketch below).
- Local receptive field
- Weight sharing
- Reduced number of parameters
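A minimal NumPy sketch of this multiply-and-sum operation (stride 1, no padding; as in most CNN libraries, the kernel is applied without flipping):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image`; at each position, multiply
    element-wise and sum the products (valid padding, stride 1)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0        # a 3x3 averaging kernel
print(conv2d(image, kernel).shape)    # (3, 3): one value per kernel position
```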

Pooling layer
The pooling layer compresses the input feature map, which reduces the number of parameters in the training process and the degree of over-fitting of the model (see the sketch below).
- Max-pooling: selecting the maximum value in the pooling window.
- Mean-pooling: calculating the average of all values in the pooling window.
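Both variants in a minimal NumPy sketch, assuming a non-overlapping window (stride equal to the window size):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling with a size x size window (stride = size)."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]           # trim to a multiple of size
    blocks = x.reshape(H // size, size, W // size, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(x, 2, "max"))    # maximum value in each 2x2 window
print(pool2d(x, 2, "mean"))   # average of each 2x2 window
```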

Fully connected layer and Softmax layer
Each node of the fully connected layer is connected to all the nodes of the previous layer; it combines the features extracted by the front layers. The softmax layer turns the resulting scores into class probabilities (sketched below).
Fig 1. Fully connected layer. Fig 2. Complete CNN structure. Fig 3. Softmax layer.
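For reference, the softmax mapping in NumPy (the max-subtraction is standard practice for numerical stability, not from the slides):

```python
import numpy as np

def softmax(z):
    """Turn the fully connected layer's scores into class probabilities."""
    z = z - z.max()               # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])             # e.g. outputs of the last FC layer
print(softmax(scores), softmax(scores).sum())  # probabilities summing to 1
```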

Training and Testing
Training stage:
- Forward propagation: take a sample (X, Yp) from the sample set and put X into the network; calculate the corresponding actual output Op.
- Back propagation: calculate the difference between the actual output Op and the ideal output Yp; adjust the weight matrices to minimize the error.
Testing stage: put different images and labels into the trained convolutional neural network and compare the output with the actual value of the sample.
Before the training stage, we should initialize the weights with small random numbers (a minimal end-to-end sketch follows).
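A minimal NumPy sketch of this loop for a toy two-layer network: small random initialization, forward propagation to get Op, a squared error against Yp, and gradient steps on the weight matrices. All sizes and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Initialize weights with small random numbers, as the slides recommend.
W1 = 0.01 * rng.standard_normal((64, 32))
W2 = 0.01 * rng.standard_normal((32, 10))

X = rng.standard_normal((128, 64))        # toy samples
Y = np.eye(10)[rng.integers(0, 10, 128)]  # toy one-hot labels Yp

lr = 0.1
for epoch in range(100):
    # Forward propagation: compute the actual output Op.
    h = np.maximum(0, X @ W1)             # ReLU hidden layer
    Op = h @ W2                           # linear output layer
    # Error between the actual output Op and the ideal output Yp.
    err = Op - Y
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
    # Back propagation: adjust the weight matrices to minimize the error.
    dW2 = h.T @ err / len(X)
    dh = (err @ W2.T) * (h > 0)           # gradient through the ReLU
    dW1 = X.T @ dh / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2
```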

CNN Structure Evolution
[Evolution-tree figure, 1980-2015, from Hinton's BP and the Neocognitron (1980) through LeNet (LeCun, 1989/1998) to the modern ILSVRC-era networks (ImageNet Large Scale Visual Recognition Challenge):]
- Historical breakthrough: AlexNet (2012) - ReLU, Dropout, GPU + big data
- Deeper networks: VGG16, VGG19, MSRA-Net (2014-2015)
- Enhanced functionality of the convolution module: NIN (2013), GoogLeNet (2014), Inception V2 (BN, Batch Normalization), Inception V3, Inception V4 (2015)
- Detection task: R-CNN (2014), SPP-Net, Fast R-CNN, Faster R-CNN (RPN) (2015)
- New functional units and integration: FCN, FCN+CRF, STNet, CNN+RNN/LSTM
- Classification task: ResNet (2015)

LeNet (LeCun, 1998)
LeNet is a convolutional neural network designed by Yann LeCun for handwritten digit recognition in 1998. It is one of the most representative experimental systems among early convolutional neural networks and is considered the beginning of the CNN. LeNet already includes the convolution layers, pooling layers and fully connected layers that are the basic components of modern CNNs.
Network structure: 3 convolution layers + 2 pooling layers + 1 fully connected layer + 1 output layer.
LeCun Y, Bottou L, Bengio Y, et al. Gradient-Based Learning Applied to Document Recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.

AlexNet (Alex Krizhevsky, 2012)
- Network structure: 5 convolution layers + 3 fully connected layers.
- Nonlinear activation function: ReLU (rectified linear unit).
- Methods to prevent overfitting: Dropout, data augmentation.
- Big-data training: ImageNet, an image database on the order of millions of images.
- Others: GPU training, LRN (local response normalization) layers.
Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems. Curran Associates Inc., 2012: 1097-1105.

OverFeat (2013)
Sermanet P, Eigen D, Zhang X, et al. OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks[J]. Eprint Arxiv, 2013.

VGG-Net (Oxford University, 2014)
- Input: a fixed-size 224x224 RGB image.
- Filters: a very small receptive field, 3x3, with stride 1.
- Max-pooling: 2x2 pixel window, with stride 2.
Fig 1. Architecture of VGG16. Table 1. ConvNet configurations (shown in columns); the convolutional layer parameters are denoted as "conv<receptive field size>-<number of channels>".
Why 3x3 filters? Stacked 3x3 conv layers still cover a large receptive field, give more non-linearity, and have fewer parameters to learn (a quick check follows).
Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition[J]. Computer Science, 2014.
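The parameter claim is easy to verify: three stacked 3x3 layers see the same 7x7 receptive field as a single 7x7 layer but use 27*C^2 instead of 49*C^2 weights (assuming C input and C output channels, biases ignored):

```python
C = 64                            # channels in and out (illustrative)
stack_3x3 = 3 * (3 * 3 * C * C)   # three stacked 3x3 conv layers: 27*C^2
single_7x7 = 7 * 7 * C * C        # one 7x7 conv layer, same receptive field: 49*C^2
print(stack_3x3, single_7x7)      # 110592 vs 200704 -> ~45% fewer parameters
```

Each of the three stacked layers also contributes its own non-linearity, which a single 7x7 layer does not.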

Network-in-Network (NIN, Shuicheng Yan, 2013)
Network structure: 4 mlpconv layers + a global average pooling layer.
Fig 1. Linear convolution vs. MLP convolution. Fig 2. Fully connected layer vs. global average pooling layer. Fig 3. NIN structure.
- Linear combination of multiple feature maps; information integration across channels.
- Reduced parameters, a smaller network, and less over-fitting.
Min Lin et al. Network in Network. Arxiv, 2013.

GoogLeNet (Inception V1, 2014)
- Proposed the Inception architecture and optimized it.
- Canceled the fully connected layer.
- Used auxiliary classifiers to accelerate network convergence.
Fig 1. Inception module, naive version. Fig 2. Inception module with dimension reductions. Fig 3. GoogLeNet network (22 layers).
Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1-9.

Inception V2 (2015)
Introduced Batch Normalization.
Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. arXiv preprint arXiv:1502.03167, 2015.

Inception V3 (2015)
Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.

ResNet (Kaiming He, 2015)
A simple and clean framework for training "very" deep networks via shortcut connections, with state-of-the-art performance in image classification, object detection, semantic segmentation, and more (a sketch of the residual block follows).
Fig 1. Shortcut connections. Fig 2. ResNet structure (152 layers).
See also: FractalNet.
He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition[J]. 2015: 770-778.
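A minimal PyTorch sketch of the shortcut connection y = F(x) + x (batch normalization and the paper's projection shortcut for changing dimensions are omitted here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)   # shortcut connection: F(x) + x

y = ResidualBlock(64)(torch.randn(1, 64, 32, 32))  # shape is preserved
```

Because the block only has to learn the residual F(x) = y - x, gradients also flow directly through the identity path, which is what makes very deep stacks trainable.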

Inception V4 / Inception-ResNet (2015)
Szegedy C, Ioffe S, Vanhoucke V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning[J]. arXiv preprint arXiv:1602.07261, 2016.

Comparison
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and a model size under 0.5 MB.
- Xception.

R-CNN (2014)
- Region proposals: Selective Search.
- Resize the region proposals: warp all region proposals to the required size (227x227, the AlexNet input).
- Compute CNN features: extract a 4096-dimensional feature vector from each region proposal using AlexNet.
- Classify: train a linear SVM classifier for each class.
In short, R-CNN = region proposals + CNN.
[1] Uijlings J R R, Sande K E A V D, Gevers T, et al. Selective Search for Object Recognition[J]. International Journal of Computer Vision, 2013, 104(2): 154-171.
[2] Girshick R, Donahue J, Darrell T, et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[J]. 2014: 580-587.

SPP-Net (Spatial Pyramid Pooling network, 2015)
Fig 1. Top: a conventional CNN. Bottom: the spatial pyramid pooling network structure. Fig 2. A network structure with a spatial pyramid pooling layer.
Advantages:
- Computes the feature map of the entire image once, saving much time.
- Outputs a fixed-length feature vector for inputs of arbitrary size.
- Extracts features at different scales and can express more spatial information.
The SPP-Net method computes a convolutional feature map for the entire input image and then classifies each object proposal using a feature vector extracted from the shared feature map.
He K, Zhang X, Ren S, et al. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2015, 37(9): 1904-1916.

Fast R-CNN (2015)
A Fast R-CNN network takes an entire image and a set of object proposals as input. The network first processes the whole image with several convolutional (conv) and max-pooling layers to produce a conv feature map. For each object proposal, a region-of-interest (RoI) pooling layer extracts a fixed-length feature vector from the feature map (sketched below). Each feature vector is fed into a sequence of fully connected layers that finally branch into two sibling output layers.
Girshick R. Fast R-CNN[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015: 1440-1448.
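A minimal NumPy sketch of RoI pooling for a single proposal: the region is divided into a fixed grid and each bin is max-pooled, so any proposal size yields the same output length. The 7x7 grid and all shapes are illustrative:

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=7):
    """Max-pool one region of interest into a fixed out_size x out_size grid.
    feature_map: (C, H, W); roi: (x1, y1, x2, y2) in feature-map coordinates,
    assumed non-empty (x2 > x1, y2 > y1)."""
    x1, y1, x2, y2 = roi
    region = feature_map[:, y1:y2, x1:x2]
    C, h, w = region.shape
    ys = np.linspace(0, h, out_size + 1).astype(int)   # bin edges (rows)
    xs = np.linspace(0, w, out_size + 1).astype(int)   # bin edges (cols)
    out = np.empty((C, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Guard against empty bins when the RoI is smaller than the grid.
            cell = region[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                             xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[:, i, j] = cell.max(axis=(1, 2))
    return out

fmap = np.random.default_rng(0).random((256, 40, 60))
vec = roi_pool(fmap, (10, 5, 30, 25)).reshape(-1)  # fixed-length 256*7*7 vector
```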

Faster R-CNN (2015)
Faster R-CNN = RPN + Fast R-CNN. A Region Proposal Network (RPN) takes an image (of any size) as input and outputs a set of rectangular object proposals, each with an objectness score.
Figure 1. Faster R-CNN is a single, unified network for object detection. Figure 2. Region Proposal Network (RPN).
Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]// Advances in Neural Information Processing Systems. 2015: 91-99.

Training tricks
- Data augmentation
- Dropout
- ReLU
- Batch Normalization

Data Augmentation
Rotation, flip, zoom, shift, scale, contrast, noise disturbance, color jitter, ... (a sketch of a few of these follows).
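A minimal NumPy sketch of a few of these transforms (flip, shift, contrast, noise; rotation and zoom would typically come from an image library):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return a randomly transformed copy of an (H, W) image."""
    out = img.copy()
    if rng.random() < 0.5:                       # horizontal flip
        out = out[:, ::-1]
    shift = rng.integers(-2, 3)                  # small random horizontal shift
    out = np.roll(out, shift, axis=1)
    out = out * rng.uniform(0.8, 1.2)            # contrast jitter
    out = out + rng.normal(0, 0.01, out.shape)   # noise disturbance
    return out

batch = rng.random((8, 28, 28))
augmented = np.stack([augment(im) for im in batch])  # new training views
```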

Dropout (2012)
Dropout consists of setting to zero the output of each hidden neuron with probability p. The neurons which are "dropped out" in this way do not contribute to the forward pass and do not participate in backpropagation (a sketch follows).
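A minimal NumPy sketch. Note it uses the now-common "inverted" dropout, which scales the surviving activations by 1/(1-p) during training; the original AlexNet formulation instead multiplies the outputs by 0.5 at test time:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, train=True):
    """Zero each hidden activation with probability p during training."""
    if not train:
        return h                       # test time: use all neurons
    mask = (rng.random(h.shape) >= p)  # dropped units take no part in the
    return h * mask / (1.0 - p)        # forward or backward pass

h = rng.standard_normal((4, 6))        # a batch of hidden activations
print(dropout(h, p=0.5))
```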

ReLU (Rectified Linear Unit)
Advantages of the rectified activation f(x) = max(0, x): simplified calculation, and it avoids the vanishing-gradient problem.

Batch Normalization (2015)
Insert a normalization layer at the input of each layer of the network. For a layer with d-dimensional input x = (x^(1), ..., x^(d)), we normalize each dimension:

    x̂^(k) = (x^(k) − E[x^(k)]) / √(Var[x^(k)])

This counters the internal covariate shift (a sketch follows).
Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. arXiv preprint arXiv:1502.03167, 2015.
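A minimal NumPy sketch of the training-time transform, including the learned scale gamma and shift beta from the paper (the running statistics used at test time are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each of the d dimensions over the mini-batch, then
    apply the learned scale (gamma) and shift (beta). x: (batch, d)."""
    mean = x.mean(axis=0)                    # E[x^(k)] per dimension
    var = x.var(axis=0)                      # Var[x^(k)] per dimension
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta

x = np.random.default_rng(0).standard_normal((32, 8)) * 5 + 3
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 and ~1
```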

Application in Aesthetic Image Evaluation
- Dong Z, Shen X, Li H, et al. Photo Quality Assessment with DCNN that Understands Image Well[M]// MultiMedia Modeling. Springer International Publishing, 2015: 524-535.
- Lu X, Lin Z, Jin H, et al. Rating image aesthetics using deep learning[J]. IEEE Transactions on Multimedia, 2015, 17(11): 2021-2034.
- Wang W, Zhao M, Wang L, et al. A multi-scene deep learning model for image aesthetic evaluation[J]. Signal Processing: Image Communication, 2016, 47: 511-518.

Photo Quality Assessment with DCNN that Understands Image Well
- DCNN_Aesth: a well-trained network plus a two-class SVM classifier.
- DCNN_Aesth_SP: original images plus segmented images, with a spatial pyramid.
- Datasets: ImageNet, CUHK, AVA.

Rating image aesthetics using deep learning
Supports heterogeneous inputs, i.e., global and local ...
