Visual Analytics of Deep Learning (slide deck: 深度学习的可视分析.pptx)
Slide 1: Visual Analytics of Deep Learning

Slide 2: Motivation
- An expert runs training data through the training process and obtains a DL model whose output is, e.g., "Cat (prob = 0.93)"
- The expert's questions: Why is the output like this? When does the model work? When does the model fail? When can I trust the model?
- The development of high-quality deep models typically relies on a substantial amount of trial and error

Slide 3: Explainable Deep Learning
- Explainable deep learning gives the expert reasons for an output: "Cat, because: fur, ear"
- Goals: understand the model output; understand when it works; understand when it fails; understand why it fails
- Explainable models: explain the model outputs by statistical approaches, e.g., the ICML 2017 best paper
- Visual analytics of deep models: visualize the workings of the model and support interactive exploration, e.g., the VAST 2017 best paper

Slide 4: Overview — In Academia
- A hot topic; papers per year:

| | 2016 | 2017 | 2018 |
|---|---|---|---|
| Deep learning specific | 2 | 7* | 7** |
| Other models / generic | 2 | 9 | 6 |
| Total | 4 | 16 | 13 |

*Includes the best paper in VAST 2017. **Includes an honorable mention in VAST 2018.
- Data is
collected only in IEEE VIS

Slide 5: Overview — In Industry
- Google: TensorBoard, AutoML
- Microsoft: CustomVision, Machine Learning Service, Machine Learning Studio
- IBM: Visual Recognition
- Facebook: FBLearner

Slide 6: TensorBoard
- Visualizing learning; graph visualization; histogram dashboard

Slide 7: Outline
- Part 1: Model
- Part 2: Training
- Part 3: Dataset
- Part 4: Cost function

"Nearly all deep learning algorithms can be described as particular instances of a fairly simple recipe: combine a specification of a dataset, a cost function, an optimization procedure, and a model." — from the "Deep Learning" book

Slide 8: Part 1 — Model
- Models are deeper and deeper: AlexNet, 7 layers (2013); VGG, 19 layers (2015); ResNet, 101 layers (2017)
- Structures are more complex: no longer chain-like, but with shortcuts
- Challenges: an efficient mining approach that supports real-time interaction; visual clutter when showing the model structure and the neurons in each layer

Slide 9: Analyzing the Noise Robustness of Deep Neural Networks
- Mengchen Liu, Shixia Liu, Hang Su, Kelei Cao, Jun Zhu. VAST 2018

Slide 10: Adversarial Examples
- Intentionally designed to mislead a deep neural network (DNN) into making an incorrect prediction, e.g., a giant panda image classified as a guenon monkey

Slide 11: Pipeline
- Datapath extraction → datapath visualization

Slide 12: Datapath Extraction — Motivation
- Current method: the most activated neurons
- Problem: the most activated neurons give misleading results when a highly recognizable secondary object is present
- Reasons: neurons have complex interactions; there is a gap between activation and prediction

Slide 13: Datapath Extraction — Formulation
- The critical neurons for a prediction: the neurons that contributed most to the final prediction
- Subset selection: keep the original prediction while selecting a minimized subset of neurons (N: all neurons; Ns: neuron subset; p(·): prediction)
- Extended to a set of images X

Slide 14: Datapath Extraction — Solution
- Directly solving is time-consuming: the problem is NP-complete, with a large search space due to the large number of layers and neurons in a CNN
- Approach: divide-and-conquer-based search space reduction, followed by a quadratic approximation — an accurate approximation in the smaller search space

Slide 15: Datapath Extraction — Search Space Reduction
- Original problem: 57.78 million dimensions (network: ResNet-101)
- Split into layers: 2k to 1.44 million dimensions per layer
- Split into feature maps: 64 to 2k dimensions per feature map
- Granularity: neurons in a layer → neurons in a feature map → a single neuron

Slide 16: Datapath Extraction — Quadratic Approximation
- Deciding whether the j-th feature map in layer i is critical is still NP-hard, so the discrete selection is relaxed to a continuous one
- A Taylor decomposition of the prediction yields Q_{j,k} = (∂p/∂a_j)(∂p/∂a_k), where a_j is the activation vector of the j-th feature map; the gradients are computed by back-propagation in each iteration
- 1. Q bridges the gap between activation and prediction
- 2. Each element Q_{j,k} approximately models the interaction between feature map j and feature map k
- The relaxed problem is then solved as a quadratic optimization

Slide 17: Datapath Extraction — Motivation (detail)
- Current method: the most activated neurons (learned features / activations shown for Neuron 1 and Neuron 2)
- Problem: misleading results when there exists a highly recognizable secondary object
- Reasons: neurons have complex interactions; there is a gap between activation and prediction
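The quadratic approximation can be illustrated with a small, self-contained sketch. Everything below is a hypothetical stand-in — a logistic scorer instead of a CNN, finite differences instead of back-propagation, and made-up names (`predict`, `quadratic_form`) — so it shows the shape of Q_{j,k} = (∂p/∂a_j)(∂p/∂a_k), not the paper's implementation.

```python
import math

# Hypothetical toy "network": a logistic score over four feature-map
# activations. The real method differentiates a CNN's class probability
# p with respect to each feature map's activations via back-propagation.
W = [0.9, -0.4, 0.25, 0.05]

def predict(a):
    """Toy prediction p(a): logistic function of a weighted sum."""
    z = sum(w * x for w, x in zip(W, a))
    return 1.0 / (1.0 + math.exp(-z))

def gradient(a, eps=1e-6):
    """Central finite-difference estimate of dp/da_j for each j."""
    g = []
    for j in range(len(a)):
        hi, lo = a[:], a[:]
        hi[j] += eps
        lo[j] -= eps
        g.append((predict(hi) - predict(lo)) / (2 * eps))
    return g

def quadratic_form(a):
    """Q[j][k] = (dp/da_j) * (dp/da_k): each entry approximates the
    interaction between feature maps j and k w.r.t. the prediction."""
    g = gradient(a)
    return [[gj * gk for gk in g] for gj in g]

a = [1.2, 0.7, 0.0, 2.1]   # activations of four feature maps
Q = quadratic_form(a)
# Q is symmetric by construction; diagonal entries score individual
# feature maps, off-diagonal entries score pairwise interactions.
```

Selecting the feature maps that maximize the total score under a size budget then mimics the relaxed subset-selection objective.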
Slide 18: Why?

Slides 19–29 (mostly figures):
- Feature map clusters, colored by A(normal) vs. A(adversarial)
- Euler-diagram-based layout to present the feature maps in a layer: shared feature maps vs. unique feature maps
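The subset-selection formulation — keep the prediction p while zeroing every neuron outside a minimized subset Ns — can be sketched with a greedy loop on a toy two-class scorer. All names and the activation-magnitude heuristic below are my own illustration, not the authors' algorithm; the paper avoids exactly this kind of brute-force search via the search-space reduction and quadratic approximation.

```python
# Hypothetical two-class linear scorer over four feature-map
# activations (a stand-in for a CNN's prediction p).
W = [
    [0.8, -0.2, 0.1, 0.6],   # class-0 weights
    [0.1, 0.9, -0.3, 0.2],   # class-1 weights
]

def predict(a):
    """Predicted class label: argmax of the class scores."""
    scores = [sum(w * x for w, x in zip(row, a)) for row in W]
    return max(range(len(scores)), key=scores.__getitem__)

def critical_subset(a):
    """Greedily grow a neuron subset Ns until keeping only Ns
    (zeroing all other activations) preserves the prediction."""
    target = predict(a)
    # Heuristic ranking by activation magnitude -- exactly the
    # "most activated" idea the motivation slides criticize, used
    # here only to keep the sketch short.
    order = sorted(range(len(a)), key=lambda j: -abs(a[j]))
    kept = set()
    for j in order:
        kept.add(j)
        masked = [x if i in kept else 0.0 for i, x in enumerate(a)]
        if predict(masked) == target:
            return kept
    return kept

a = [1.5, 0.2, -0.4, 0.9]   # activations for one input image
Ns = critical_subset(a)     # minimized subset preserving p(a)
```

Extending to a set of images X, as on the formulation slide, would combine the per-image subsets; note the greedy loop gives no minimality guarantee, which is why the paper reformulates the problem instead.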