Learning video courseware: RNN training.pptx


Training Recurrent Neural Networks
Hung-yi Lee

Goal
A recurrent network maps an input sequence $x^1, x^2, x^3, \ldots$ to an output sequence $y^1, y^2, y^3, \ldots$. At every time step the same three weight matrices are applied: $W_i$ on the input, $W_h$ on the memory carried over from the previous step (initialized to $init$), and $W_o$ on the output. The algorithm for training these weights is backpropagation through time (BPTT).
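To make the picture concrete, here is a minimal sketch of the forward pass (my own illustration, not code from the slides; the sigmoid activation and the matrix shapes are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_forward(xs, Wi, Wh, Wo, init):
    """Vanilla RNN: the same Wi, Wh, Wo are reused at every time step."""
    a, ys = init, []
    for x in xs:
        a = sigmoid(Wi @ x + Wh @ a)   # memory a_t from input x_t and a_{t-1}
        ys.append(sigmoid(Wo @ a))     # output y_t read out of the memory
    return ys

# Example: 3 steps, 2-dim inputs, 4-dim memory, 1-dim outputs
rng = np.random.default_rng(0)
Wi, Wh, Wo = rng.normal(size=(4, 2)), rng.normal(size=(4, 4)), rng.normal(size=(1, 4))
ys = rnn_forward([rng.normal(size=2) for _ in range(3)], Wi, Wh, Wo, np.zeros(4))
```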

Review: Backpropagation

Forward pass: compute the activations layer by layer,
$z^1 = W^1 x + b^1$, and for layer $l$: $z^l = W^l a^{l-1} + b^l$, $a^l = \sigma(z^l)$.

Define the error signal of layer $l$ as $\delta^l_i = \partial C / \partial z^l_i$.

Backward pass: the error signal is computed at the output layer and propagated back,
$\delta^L = \sigma'(z^L) \odot \nabla_y C$,
$\delta^{l-1} = \sigma'(z^{l-1}) \odot (W^l)^T \delta^l$.

The gradient of the cost with respect to each weight is the layer input times the error signal:
$\partial C / \partial w^l_{ij} = (\partial z^l_i / \partial w^l_{ij})(\partial C / \partial z^l_i) = x^{l-1}_j \delta^l_i$, where $x^{l-1}$ (that is, $a^{l-1}$) is the input to layer $l$.
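As a sanity check on these formulas, a minimal numpy sketch (mine, not the slides'; the quadratic cost $C = \frac{1}{2}\|a^2 - t\|^2$ is assumed) of one forward and backward pass through a two-layer network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # layer 1
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # layer 2 (output layer L)
x, t = rng.normal(size=2), np.array([1.0])      # input and target

# Forward pass: z^l = W^l a^{l-1} + b^l, a^l = sigma(z^l)
z1 = W1 @ x + b1;  a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

# Backward pass: delta^L = sigma'(z^L) * (dC/dy), then (W^l)^T pushes it back
delta2 = a2 * (1 - a2) * (a2 - t)
delta1 = a1 * (1 - a1) * (W2.T @ delta2)

# Weight gradients: dC/dw^l_ij = x^{l-1}_j * delta^l_i (an outer product)
dW2, db2 = np.outer(delta2, a1), delta2
dW1, db1 = np.outer(delta1, x), delta1
```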

Backpropagation Through Time (BPTT)

To train the recurrent network, unfold it in time. For the cost $C^n$ on the output $y^n$, the unfolded network is a very deep feedforward network whose input is $init, x^1, x^2, \ldots, x^n$ and whose output is $y^n$. The error signal $\delta^n$ at step $n$ is then propagated backwards through all the unfolded steps, producing $\delta^{n-1}, \delta^{n-2}, \ldots, \delta^1$, exactly as in ordinary backpropagation.

The one difference: in the unfolded network some weights are shared. Shared weights must be initialized with the same value and must remain the same after every update (in an implementation all copies are pointers to the same memory), so the gradient used to update a shared weight is the sum of the gradients computed at each of its copies, as the sketch after this paragraph illustrates numerically.

BPTT over a whole sequence $x^1, \ldots, x^4$ with outputs $y^1, \ldots, y^4$:
Forward pass: compute the memory values $a^1, a^2, a^3, a^4$.
Backward pass: propagate the error signal of every cost back through the unfolded network and accumulate the gradients of the shared weights.
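A minimal sketch (my own, using a scalar linear RNN for brevity) showing that the derivative with respect to the shared weight equals the sum over its unfolded copies:

```python
# Scalar linear RNN, three steps: a_t = w * a_{t-1} + x_t, cost C = a_3.
def cost(w, xs, a0=0.0):
    a = a0
    for x in xs:
        a = w * a + x          # every step reuses the same shared weight w
    return a

xs, w, n = [1.0, 1.0, 1.0], 0.5, 3

# Gradient contribution of each unfolded copy of w, holding the others fixed:
# the copy used at step t+1 sees input a_t and is then multiplied by w for
# each remaining step, so dC/dw_{t+1} = a_t * w**(n-1-t).
a_hist = [0.0]
for x in xs:
    a_hist.append(w * a_hist[-1] + x)
per_copy = [a_hist[t] * w ** (n - 1 - t) for t in range(n)]

# The derivative w.r.t. the single shared w is the sum over the copies.
eps = 1e-6
numeric = (cost(w + eps, xs) - cost(w - eps, xs)) / (2 * eps)
print(sum(per_copy), numeric)  # both ~ 2.0
```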

Unfortunately, it is not easy to train an RNN: the error surface of the cost is rough, either very flat or very steep.
(Source: http://jmlr.org/proceedings/papers/v28/pascanu13.pdf)

Toy example: a linear network with a single weight $w$ applied at every one of $n$ steps, so the output is $y^n = w^n$ (e.g. $y = w^{1000}$ if $n = 1000$). Plotting the cost $C^n$ against $w$ for $n = 10$, $100$ and $1000$ shows the surface flattening into a plateau for $|w| < 1$ with a cliff near $|w| = 1$: the gradient takes only extremely large and extremely small values.
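A quick numeric check of the toy example (the cost $C = (w^n - 1)^2$ is my assumption for illustration; the slides only plot $C^n$):

```python
# Toy example: y_n = w**n.  Assume (my illustration) the cost C = (w**n - 1)**2,
# so dC/dw = 2 * (w**n - 1) * n * w**(n - 1).
def grad(w, n):
    return 2 * (w ** n - 1) * n * w ** (n - 1)

for n in (10, 100, 1000):
    print(n, grad(1.01, n), grad(0.99, n))
# As n grows, the gradient at w = 1.01 explodes while the gradient at
# w = 0.99 shrinks toward zero: the surface is either very steep or very flat.
```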

Gradient Vanishing / Exploding

The same happens inside BPTT: going back one time step multiplies the error signal by the recurrent weights, so after many steps the gradient has been multiplied by the same weights many times over. For simplicity, assume a linear activation function.

[Figure: six panels ("1 step", "2 steps", "5 steps", "10 steps", "20 steps", "50 steps") showing gradient values on a log-scaled axis from $-100$ to $100$; with more steps the values drift toward extremely large or extremely small magnitudes.]
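Spelled out (a standard one-line derivation, not verbatim from the slides): with a linear activation the memory evolves as $a^t = W_h a^{t-1} + W_i x^t$, so propagating the error signal back through $n-1$ steps gives $\delta^1 = (W_h^T)^{n-1} \delta^n$. Writing $W_h = Q \Lambda Q^{-1}$ (assuming it is diagonalizable), every eigenvalue $\lambda$ of $W_h$ is raised to the power $n-1$: components with $|\lambda| > 1$ explode and components with $|\lambda| < 1$ vanish, matching the panels above.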

Possible Solutions

Clipped gradient: when the gradient is very large (near a cliff of the error surface over the weights $w_1, w_2$), clip it to a bounded range before taking the step, e.g. theano.tensor.clip(x, min, max).
(Source: http://jmlr.org/proceedings/papers/v28/pascanu13.pdf)
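The slide's one-liner is Theano's element-wise clip; a minimal sketch of both that and the norm-clipping variant from Pascanu et al. (numpy stand-ins, my own code):

```python
import numpy as np

def clip_elementwise(g, lo=-1.0, hi=1.0):
    """Element-wise clipping, the numpy analogue of theano.tensor.clip(x, min, max)."""
    return np.clip(g, lo, hi)

def clip_by_norm(g, threshold=1.0):
    """Norm clipping (Pascanu et al.): rescale g when its norm exceeds the
    threshold, bounding the step length without changing its direction."""
    norm = np.linalg.norm(g)
    return g if norm <= threshold else g * (threshold / norm)

g = np.array([0.3, -40.0, 5.0])   # an exploding gradient near a cliff
step = clip_by_norm(g)            # same direction, bounded length
```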

Momentum and Nesterov's Accelerated Gradient (NAG)
(Source: http://www.cs.toronto.edu/~fritz/absps/momentum.pdf)

On the flat floor of a valley, plain gradient descent stalls wherever the gradient is zero. Momentum keeps moving: each movement combines the current gradient with the last movement, so a zero gradient no longer stops the descent. NAG improves on momentum by first applying the last movement and only then computing the gradient at that look-ahead point, correcting the course one step earlier.
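A minimal sketch of the two update rules (my formulation; the lr and mu values are assumed hyper-parameters, and grad is any gradient oracle):

```python
def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """Classical momentum: movement = mu * last movement - lr * gradient."""
    v = mu * v - lr * grad(w)
    return w + v, v

def nag_step(w, v, grad, lr=0.01, mu=0.9):
    """NAG: apply the last movement first, then take the gradient at that
    look-ahead point, so the course is corrected one step earlier."""
    v = mu * v - lr * grad(w + mu * v)
    return w + v, v

grad = lambda w: 2.0 * w          # toy valley C(w) = w**2
w, v = 5.0, 0.0
for _ in range(200):
    w, v = nag_step(w, v, grad)
print(w)                          # approaches the minimum at w = 0
```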

RMSProp

Review: Adagrad divides the learning rate by the root mean square of all past gradients, using first derivatives to estimate the second derivative: parameters in flat directions get a larger learning rate, parameters in steep directions a smaller one. But the error surface can be even more complex when training an RNN, and a direction that starts flat may later become steep. RMSProp therefore uses a root mean square of the gradients in which previous gradients are decayed, so the effective learning rate tracks the recent shape of the surface rather than its entire history.
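A minimal sketch of the RMSProp accumulator (my own; the decay rate alpha = 0.9 and the epsilon guard are assumed defaults, not values from the slides):

```python
import math

def rmsprop_step(w, r, grad, lr=0.01, alpha=0.9, eps=1e-8):
    """RMSProp: divide the step by a root mean square of the gradients
    in which previous gradients are decayed by alpha."""
    g = grad(w)
    r = alpha * r + (1 - alpha) * g ** 2
    return w - lr * g / (math.sqrt(r) + eps), r

grad = lambda w: 2.0 * w          # toy cost C(w) = w**2
w, r = 5.0, 0.0
for _ in range(1000):
    w, r = rmsprop_step(w, r, grad)
print(w)                          # approaches 0 as r tracks the recent g**2
```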

LSTM

LSTM can address the gradient vanishing problem. From the input $x^t$ the cell computes four signals: the candidate input $z$, the input gate $z^i$, the forget gate $z^f$ and the output gate $z^o$; this uses four times the parameters of a vanilla RNN. The cell state is updated additively, $c^t = z^f \odot c^{t-1} + z^i \odot z$, and the output $y^t$, together with the value $h^t$ passed to the next time step, is read out through the output gate. In the "peephole" extension, the previous cell state is also fed into the gates. Because memory and input are added, the influence of the memory never dies away while the forget gate stays open: the cell state acts as a Constant Error Carousel (CEC), so the error signal can travel back many steps without vanishing.
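A minimal sketch of one LSTM step (standard equations; the tanh nonlinearities, the stacked 4×d parameter layout, and the omission of peepholes are my choices, not the slides'):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step.  W, U, b hold the parameters of all four signals
    (z, z_i, z_f, z_o) stacked: the '4 times of parameters'."""
    s = W @ x + U @ h + b               # shape (4*d,)
    d = c.shape[0]
    z  = np.tanh(s[0:d])                # candidate input
    zi = sigmoid(s[d:2*d])              # input gate
    zf = sigmoid(s[2*d:3*d])            # forget gate
    zo = sigmoid(s[3*d:4*d])            # output gate
    c  = zf * c + zi * z                # additive cell-state update (the CEC)
    h  = zo * np.tanh(c)                # output, fed to the next time step
    return h, c

d, k = 4, 3                             # memory size, input size
rng = np.random.default_rng(2)
W, U, b = rng.normal(size=(4*d, k)), rng.normal(size=(4*d, d)), np.zeros(4*d)
h, c = np.zeros(d), np.zeros(d)
for x in [rng.normal(size=k) for _ in range(5)]:
    h, c = lstm_step(x, h, c, W, U, b)
```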

Other Simpler Variants
GRU: Cho, Kyunghyun, et al., "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", EMNLP, 2014.
SCRN: Mikolov, Tomas, et al., "Learning longer memory in recurrent neural networks", ICLR, 2015.

Better Initialization
A vanilla RNN initialized with the identity matrix and ReLU activations can also avoid vanishing gradients (a minimal sketch follows the concluding remarks): Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton, "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units", 2015.

Concluding Remarks
Be careful when training RNNs. Possible solutions:
- Clipping the gradients
- Advanced optimization technology (NAG, RMSProp)
- Try LSTM (or other simpler variants)
- Better initialization
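As referenced above, a minimal sketch of the identity-plus-ReLU initialization (my own; the shapes and the small input-weight scale are assumptions):

```python
import numpy as np

def irnn_step(x, a, Wi, Wh):
    """Vanilla RNN step with ReLU activation."""
    return np.maximum(0.0, Wi @ x + Wh @ a)

d, k = 4, 3
rng = np.random.default_rng(3)
Wi = rng.normal(scale=0.01, size=(d, k))   # small random input weights
Wh = np.eye(d)                             # identity recurrent initialization
# At initialization the (non-negative) memory is copied forward unchanged,
# so gradients neither vanish nor explode through the recurrence until
# training moves Wh away from the identity.
a = np.zeros(d)
for x in [rng.normal(size=k) for _ in range(5)]:
    a = irnn_step(x, a, Wi, Wh)
```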
