
模式识别讲座PPT课件.ppt (Pattern Recognition Lecture Slides)

  • Uploaded by (seller): 三亚风情
  • Document ID: 2670727
  • Uploaded: 2022-05-17
  • Format: PPT
  • Pages: 79
  • Size: 2MB

Keywords: pattern recognition, lecture, PPT, courseware
Resource description:

Pattern Recognition
Dr. Shi, Daming
Nanyang Technological University / Harbin Engineering University

What is Pattern Recognition
- Classify raw data into the category of the pattern.
- A branch of artificial intelligence concerned with the identification of visual or audio patterns by computers, for example character recognition, speech recognition, face recognition, etc.
- Two categories: syntactic (or structural) pattern recognition and statistical pattern recognition.
- Pattern Recognition = Pattern Classification.
- Training phase: training data -> feature extraction -> learning (feature selection, clustering, discriminant function generation, grammar parsing) -> knowledge.
- Recognition phase: unknown data -> feature extraction -> recognition (statistical, structural), using the learned knowledge -> results.

Categorisation
- Based on application areas: face recognition, speech recognition, character recognition, etc.
- Based on decision-making approaches: syntactic pattern recognition and statistical pattern recognition.

Syntactic Pattern Recognition
- Any problem is described with a formal language, and the solution is obtained through grammatical parsing. (In memory of Prof. FU, King-Sun and Prof. Shu Wenhao.)

Statistical Pattern Recognition
- In the statistical approach, each pattern is viewed as a point in a multi-dimensional space. The decision boundaries are determined by the probability distributions of the patterns belonging to each class, which must either be specified or learned.

Scope of the Seminar
- Module 1: Distance-Based Classification
- Module 2: Probabilistic Classification
- Module 3: Linear Discriminant Analysis
- Module 4: Neural Networks for P.R.
- Module 5: Clustering
- Module 6: Feature Selection

Module 1: Distance-Based Classification

Overview
- Distance-based classification is the most common type of pattern recognition technique.
- Its concepts are a basis for other classification techniques.
- First, a prototype is chosen through training to represent a class; then the distance from an unknown datum to the class is calculated using the prototype.

Classification by Distance
- Objects can be represented by vectors in a space.
- In training, we are given labelled samples for each class; in recognition, an unknown datum is assigned to the class whose prototype is nearest to it.

Prototype
To find the pattern-to-class distance, we need a class prototype (pattern):
- (1) Sample mean. For class c_i, the prototype is the mean of its training samples, z_i = (1/N_i) * Σ_{x in c_i} x.
- (2) Most typical sample. Choose the training sample z in c_i such that its total distance to the other samples of c_i is minimized.
- (3) Nearest neighbour. Choose the training sample nearest to the unknown datum. Nearest-neighbour prototypes are sensitive to noise and outliers in the training set.
- (4) k-nearest neighbours. The pattern y is classified into the class of (the majority of) its k nearest neighbours from the training samples. k-NN is more robust against noise, but is more computationally expensive. The chosen distance determines how "near" is defined.
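A minimal sketch (not from the slides) contrasting a sample-mean prototype classifier with k-NN classification as described above; the toy arrays and helper names are placeholders.

```python
import numpy as np

def mean_prototypes(X_train, y_train):
    """Sample-mean prototype: one mean vector per class label."""
    return {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def classify_by_prototype(x, prototypes):
    """Assign x to the class whose prototype is nearest (Euclidean distance)."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

def classify_knn(x, X_train, y_train, k=3):
    """Assign x to the majority class among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: two classes in a 2-D feature space (e.g. length vs. lightness)
X_train = np.array([[1.0, 1.1], [1.2, 0.9], [3.0, 3.2], [2.9, 3.1]])
y_train = np.array([0, 0, 1, 1])
x = np.array([2.8, 3.0])
print(classify_by_prototype(x, mean_prototypes(X_train, y_train)))  # -> 1
print(classify_knn(x, X_train, y_train, k=3))                       # -> 1
```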

Distance Measures
- The most familiar distance metric is the Euclidean distance: d(x, y) = sqrt(Σ_k (x_k - y_k)^2).
- Another example is the Manhattan distance: d(x, y) = Σ_k |x_k - y_k|.
- Many other distance measures exist.

Minimum Euclidean Distance (MED) Classifier
- Assign x to the class c_i whose prototype z_i minimizes the Euclidean distance ||x - z_i||.
- Equivalently, since ||x - z_i||^2 = x^T x - 2 z_i^T x + z_i^T z_i and x^T x is common to all classes, assign x to the class that maximizes the linear discriminant g_i(x) = z_i^T x - (1/2) z_i^T z_i.
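A minimal sketch of the MED classifier and its equivalent linear-discriminant form, assuming sample-mean prototypes; the prototype values and test point are illustrative only.

```python
import numpy as np

def med_classify(x, prototypes):
    """Minimum Euclidean Distance: pick the class with the nearest prototype."""
    classes = sorted(prototypes)
    dists = [np.linalg.norm(x - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]

def med_classify_linear(x, prototypes):
    """Equivalent linear form: maximize g_i(x) = z_i^T x - 0.5 * z_i^T z_i."""
    classes = sorted(prototypes)
    scores = [prototypes[c] @ x - 0.5 * prototypes[c] @ prototypes[c] for c in classes]
    return classes[int(np.argmax(scores))]

prototypes = {0: np.array([1.1, 1.0]), 1: np.array([2.95, 3.15])}
x = np.array([2.0, 2.0])
assert med_classify(x, prototypes) == med_classify_linear(x, prototypes)
print(med_classify(x, prototypes))  # -> 0 for this test point
```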

Decision Boundary
- Given a prototype and a distance metric, it is possible to find the decision boundary between classes. The boundary may be linear or nonlinear.
- Decision boundary = discriminant function.
- (Figures: two-class data plotted in the lightness-length plane, with a linear and a nonlinear decision boundary.)

Example
- Any fish is a vector in the 2-dimensional space of width and lightness. (Figure: fish samples plotted against lightness and length, with the resulting decision boundary.)

Summary (Module 1)
- Classification by the distance from an unknown datum to class prototypes.
- Choosing the prototype: sample mean, most typical sample, nearest neighbour, k-nearest neighbours.
- Decision boundary = discriminant function.

Module 2: Probabilistic Classification

Review and Extend

Maximum A Posteriori (MAP) Classifier
- Ideally, we want to favour the class with the highest probability for the given pattern: choose the class C_i that maximizes P(C_i | x), where P(C_i | x) is the a posteriori probability of class C_i given x.

Bayesian Classification
- Bayes' theorem: P(C_i | x) = P(x | C_i) P(C_i) / P(x).
- P(x | C_i) is the class-conditional probability density (p.d.f.), which needs to be estimated from the available samples or otherwise assumed.
- P(C_i) is the a priori probability of class C_i.

MAP Classifier
- The Bayesian classifier is also known as the MAP classifier: since P(x) is the same for every class, assign the pattern x to the class with the maximum weighted p.d.f. P(x | C_i) P(C_i).
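A small sketch of the MAP decision rule under assumed 1-D Gaussian class-conditional densities; the priors, means, and variances below are invented for illustration.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """1-D Gaussian class-conditional density P(x | C_i)."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def map_classify(x, priors, means, variances):
    """MAP rule: pick the class maximizing the weighted p.d.f. P(x | C_i) * P(C_i)."""
    scores = [gaussian_pdf(x, m, v) * p for p, m, v in zip(priors, means, variances)]
    return int(np.argmax(scores))

# Two classes with assumed parameters (illustrative values only)
priors    = [0.3, 0.7]
means     = [0.0, 2.0]
variances = [1.0, 1.0]
print(map_classify(1.5, priors, means, variances))  # -> 1 for these values
```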

Accuracy vs. Risk
- However, in the real world, life is not just about accuracy. In some cases a small misclassification may result in a big disaster, for example in medical diagnosis or fraud detection.
- The MAP classifier is biased towards the most likely class (maximum likelihood classification).

Loss Function
- On the other hand, in the case of P(C1) >> P(C2), the lowest error rate can be attained by always classifying as C1. This is also known as the problem of imbalanced training data.
- A solution is to assign a loss to each kind of misclassification, which leads to the conditional-risk decision rule below.

Conditional Risk
- Instead of using the posterior P(C_i | x) alone, we use the conditional risk of taking action α_i for pattern x:
  R(α_i | x) = Σ_j λ(α_i | C_j) P(C_j | x),
  where λ(α_i | C_j) is the cost of action α_i given class C_j.
- To minimize the overall risk, choose the action with the lowest conditional risk for the pattern: α* = arg min_i R(α_i | x).

Example
- Assume that fraudulent activity is about 1% of the total credit card activity:
  C1 = Fraud, P(C1) = 0.01
  C2 = No fraud, P(C2) = 0.99
- If losses are equal for both kinds of misclassification, the MAP rule applies directly.
- However, losses are probably not the same. Classifying a fraudulent transaction as legitimate leads to direct dollar losses as well as intangible losses (e.g. reputation, hassles for consumers). Classifying a legitimate transaction as fraudulent inconveniences consumers, as their purchases are denied, which could lead to loss of future business.
- Let's assume that the ratio of loss for "no fraud" to "fraud" is 1 to 50, i.e. a missed fraud is 50 times more expensive than accidentally freezing a card due to legitimate use.
- By including the loss function, the decision boundaries change significantly: instead of comparing the weighted p.d.f.s P(x | C_i) P(C_i) directly, we compare the loss-weighted quantities and minimize the conditional risk.
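The following sketch works through the risk-based decision for the credit-card example, using the 1% prior and the 50:1 loss ratio quoted above; the likelihood values are invented placeholders, and assigning zero loss to correct decisions is an assumption of the sketch.

```python
import numpy as np

# Priors from the example: 1% fraud, 99% legitimate
priors = np.array([0.01, 0.99])          # [P(Fraud), P(NoFraud)]

# Loss matrix lam[i, j] = cost of deciding class i when the true class is j.
# Correct decisions cost 0 (assumption); a missed fraud costs 50x a false alarm.
lam = np.array([[0.0, 1.0],              # decide Fraud: a false alarm costs 1
                [50.0, 0.0]])            # decide NoFraud: a missed fraud costs 50

def decide(likelihoods):
    """Minimum-conditional-risk decision for one transaction.

    likelihoods[j] = P(x | C_j); the posterior is proportional to likelihood * prior.
    """
    post = likelihoods * priors
    post = post / post.sum()
    risks = lam @ post                   # R(decide i | x) = sum_j lam[i, j] * P(C_j | x)
    return ["Fraud", "NoFraud"][int(np.argmin(risks))]

# A transaction only mildly suggestive of fraud (placeholder densities);
# MAP alone would say NoFraud, but the 50x loss tips the decision to Fraud.
print(decide(np.array([0.30, 0.10])))
```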

Probability Density Function
- Relatively speaking, it is much easier to estimate the a priori probability, e.g. simply take P(C_i) = N_i / Σ_k N_k, the fraction of training samples belonging to class C_i.
- To estimate the p.d.f. P(x | C_i), we can either (1) assume a known form of the p.d.f. and estimate its parameters, or (2) estimate a non-parametric p.d.f. from the training samples.

Maximum Likelihood Parameter Estimation
- Without loss of generality, we consider the Gaussian density P(x | C_i) = N(x; μ_i, Σ_i), whose parameter values are to be identified from the training examples of class C_i.
- We look for the parameter values that maximize the likelihood of the training data; for the Gaussian this gives the sample mean as the estimate of μ_i and the sample covariance matrix as the estimate of Σ_i.
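A minimal sketch of the maximum-likelihood estimates for a Gaussian class-conditional density (the sample mean and the sample covariance matrix quoted on the slide); the toy data are placeholders.

```python
import numpy as np

def ml_gaussian_estimate(X):
    """ML estimates for a multivariate Gaussian fitted to samples X (N x d).

    mu_hat    = sample mean
    Sigma_hat = sample covariance (the ML version divides by N, not N - 1)
    """
    mu_hat = X.mean(axis=0)
    centered = X - mu_hat
    sigma_hat = centered.T @ centered / X.shape[0]
    return mu_hat, sigma_hat

# Toy training examples for one class C_i
X = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [2.5, 2.5]])
mu_hat, sigma_hat = ml_gaussian_estimate(X)
print(mu_hat)      # estimated mean vector
print(sigma_hat)   # estimated covariance matrix
```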

Density Estimation
- If we do not know the specific form of the p.d.f., we need a different density estimation approach: a non-parametric technique that uses variations of histogram approximation.
- (1) The simplest density estimation is to use "bins": e.g., in the 1-D case, take the x-axis and divide it into bins of length h, then estimate the probability of a sample falling in each bin as roughly k_N / (N·h), where k_N is the number of samples in the bin and N is the total number of samples.
- (2) Alternatively, we can take windows of unit volume and apply these windows to each sample. The overlap of the windows defines the estimated p.d.f. This technique is known as Parzen windows (or kernels).
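A short sketch of the two non-parametric estimates described above, a histogram ("bins") estimate and a Parzen-window estimate with a Gaussian kernel; the bandwidths and sample data are arbitrary choices for illustration.

```python
import numpy as np

def histogram_density(x, samples, h):
    """Bin-based estimate: k_N / (N * h) for the bin containing x."""
    edges = np.arange(samples.min(), samples.max() + h, h)
    counts = np.histogram(samples, bins=edges)[0]
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(counts) - 1)
    return counts[idx] / (len(samples) * h)

def parzen_density(x, samples, h):
    """Parzen-window estimate with a Gaussian kernel of bandwidth h."""
    kernels = np.exp(-0.5 * ((x - samples) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean()

samples = np.random.default_rng(0).normal(0.0, 1.0, size=500)
print(histogram_density(0.0, samples, h=0.5))  # both should be close to 0.4,
print(parzen_density(0.0, samples, h=0.3))     # the standard-normal density at x = 0
```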

Summary (Module 2)
- Bayes' theorem.
- Maximum a posteriori (MAP) classifier; with equal priors it reduces to the maximum likelihood classifier.
- Density estimation (parametric and non-parametric).

Module 3: Linear Discriminant Analysis

Linear Classifier - 1
- A linear classifier implements a discriminant function, or a decision boundary, represented by a straight line (hyperplane) in the multidimensional space.
- Given an input x = (x1, ..., xm)^T and a weight vector w = (w1, ..., wm)^T, the decision boundary of a linear classifier is given by the discriminant function
  f(x) = Σ_{k=1}^{m} w_k x_k + b = w^T x + b.
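A minimal sketch of the linear discriminant above, reusing the ballet-dancer / rugby-player labelling from the next slide; the feature meanings, weight vector, and bias are invented for illustration.

```python
import numpy as np

def linear_discriminant(x, w, b):
    """f(x) = w^T x + b; the sign of f(x) decides the class."""
    return float(w @ x + b)

def classify(x, w, b):
    """Example class definition from the slides: f(x) > 0 vs. f(x) < 0."""
    return "Ballet dancer" if linear_discriminant(x, w, b) > 0 else "Rugby player"

# Assumed features x = (height in m, weight in kg) with hand-picked weights
w = np.array([20.0, -0.5])
b = 0.0
print(classify(np.array([1.75, 60.0]), w, b))   # -> Ballet dancer
print(classify(np.array([1.80, 105.0]), w, b))  # -> Rugby player
```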

Linear Classifier - 2
- The output of the function f(x) for any input depends on the values of the weight vector and the input vector. For example, the following class definition may be employed: if f(x) > 0 then x is a ballet dancer; if f(x) < 0 then x is a rugby player.

Linear Classifier - 3
- f(x) = w^T x + b; in the (x1, x2) plane the line f(x) = 0 separates the region where f(x) > 0 from the region where f(x) < 0.

[The remaining slides of Module 3, Module 4 (Neural Networks for P.R.) and the opening slides of Module 5 (Clustering) are not included in this extract.]

Module 5: Clustering (continued)

Clustering by Density Estimation
- Example (cell image segmentation): if a pixel value is above the threshold, the pixel is cytoplasm; if below, it is nucleus. This is clustering based on density estimation: peaks of the density are cluster centres, and valleys are cluster boundaries.

Parameterized Density Estimation
- We begin with a parameterized p.d.f., in which the only thing that must be learned is the value of an unknown parameter vector.
- We make the following assumptions:
  - The samples come from a known number c of classes.
  - The prior probabilities P(ω_j) for each class are known.
  - The forms of the class-conditional densities P(x | ω_j, θ_j), j = 1, ..., c, are known.
  - The values of the c parameter vectors θ_1, θ_2, ..., θ_c are unknown.

Mixture Density
- The category labels are unknown, and this density function is called a mixture density:
  P(x | θ) = Σ_{j=1}^{c} P(x | ω_j, θ_j) P(ω_j), with θ = (θ_1, ..., θ_c)^t,
  where the P(x | ω_j, θ_j) are the component densities and the P(ω_j) are the mixing parameters.
- Our goal is to use samples drawn from this mixture density to estimate the unknown parameter vector θ.
- Once θ is known, we can decompose the mixture into its components and use a MAP classifier on the derived densities.
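A brief sketch of evaluating a Gaussian mixture density and decomposing it into per-component posteriors, as the MAP classifier on the derived densities would; the component parameters are illustrative assumptions.

```python
import numpy as np

def gaussian(x, mean, var):
    """1-D Gaussian component density P(x | omega_j, theta_j)."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def mixture_density(x, weights, means, variances):
    """P(x | theta) = sum_j P(x | omega_j, theta_j) * P(omega_j)."""
    return sum(w * gaussian(x, m, v) for w, m, v in zip(weights, means, variances))

def component_posteriors(x, weights, means, variances):
    """Decompose the mixture: P(omega_j | x), whose argmax is the MAP component."""
    joint = np.array([w * gaussian(x, m, v) for w, m, v in zip(weights, means, variances)])
    return joint / joint.sum()

# Illustrative two-component mixture
weights, means, variances = [0.4, 0.6], [0.0, 3.0], [1.0, 0.5]
print(mixture_density(2.0, weights, means, variances))
print(component_posteriors(2.0, weights, means, variances))
```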

Chinese Ying-Yang Philosophy
- Everything in the universe can be viewed as a product of a constant conflict between the opposites Ying and Yang: Ying is negative, female, invisible; Yang is positive, male, visible.
- The optimal status is reached when Ying and Yang achieve harmony.

Bayesian Ying-Yang Clustering
- The aim is to find clusters y to partition the input data x.
- x is visible but y is invisible; x decides y in training, but y decides x in running.
- The joint density can be factored in two ways: p(x, y) = p(y | x) p(x) (the Yang model) and p(x, y) = p(x | y) p(y) (the Ying model).

Bayesian Ying-Yang Harmony Learning (1)
- Learning minimizes the difference between the Ying-Yang pair, measured by the Kullback-Leibler divergence between the Yang model p(y | x) p(x) and the Ying model p(x | y) p(y), with Gaussian components G(x; m_y, Σ_y) used for p(x | y).
- To select the optimal model (cluster number), choose k* = arg min_k J(k), where J(k) is a harmony criterion built from H(k), an entropy-like function of the posteriors p(y | x_i) over the N training samples.

Bayesian Ying-Yang Harmony Learning (2)
- Parameter learning uses the EM algorithm: the E-step computes the posteriors P(y | x_i) from the current Gaussian components, and the M-step re-estimates the mixing weights, means m_y and covariances Σ_y from those posteriors.

Summary (Module 5)
- Clustering by distance: goodness of partitioning, k-means.
- Clustering by density estimation: BYY.
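The slide above only names the E-step and M-step, so here is a compact sketch of standard EM updates for a 1-D Gaussian mixture in that spirit; this is generic EM rather than the BYY-specific update, and the initial values and data are placeholders.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, seed=0):
    """Standard EM for a 1-D Gaussian mixture: returns weights, means, variances."""
    rng = np.random.default_rng(seed)
    n = len(x)
    w = np.full(k, 1.0 / k)                    # mixing weights P(y)
    m = rng.choice(x, size=k, replace=False)   # initial means
    v = np.full(k, x.var())                    # initial variances
    for _ in range(iters):
        # E-step: posterior responsibilities P(y | x_i) under the current parameters
        dens = np.exp(-(x[:, None] - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from the responsibilities
        nk = resp.sum(axis=0)
        w = nk / n
        m = (resp * x[:, None]).sum(axis=0) / nk
        v = (resp * (x[:, None] - m) ** 2).sum(axis=0) / nk
    return w, m, v

# Toy data drawn from two clusters
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])
print(em_gmm_1d(data, k=2))
```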

Module 6: Feature Selection

Motivation
- Classifier performance depends on a combination of the number of samples, the number of features, and the complexity of the classifier.
- Q1: The more samples, the better? Q2: The more features, the better? Q3: The more complex, the better?
- However, the number of samples is fixed at training time; answering Q2 and Q3 therefore requires reducing the number of features.

Curse of Dimensionality
- If the number of training samples is small relative to the number of features, performance may be degraded: as the number of features increases, the number of unknown parameters increases accordingly, and the reliability of the parameter estimation decreases.

Occam's Razor
- "Pluralitas non est ponenda sine necessitate" (plurality should not be posited without necessity). William of Ockham (ca. 1285-1349).
- To make the system simpler, unnecessary features must be removed.

Feature Selection
- In general, we would like a classifier to use a minimum number of dimensions, in order to achieve less computation and better statistical estimation reliability.
- Feature selection: given m measurements, choose the n < m best ones as features.
- We require: a criterion to evaluate features, and an algorithm to optimize the criterion.

Criterion
- Typically, the interclass distance (normalized by the intraclass distance) is used.
- For 2 classes, a typical per-feature score is d_i = (m_i1 - m_i2)^2 / (s_i1^2 + s_i2^2), where m_i1 is the mean of the ith feature in class 1 and s_i1 is its scatter (variance) in class 1.
- For k classes, the same normalized interclass distance is aggregated over the classes (e.g. over all class pairs).
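A small sketch of ranking features by the normalized interclass distance criterion, using the 2-class form given above (itself a standard reconstruction of the slide's formula) and implementing the first of the sub-optimal strategies listed on the next slide; the toy data and the epsilon guard are placeholders.

```python
import numpy as np

def interclass_distance(X1, X2):
    """Per-feature criterion: (m_i1 - m_i2)^2 / (s_i1^2 + s_i2^2) for two classes."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    v1, v2 = X1.var(axis=0), X2.var(axis=0)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)   # epsilon avoids division by zero

def rank_features(X1, X2, n):
    """'Rank features by effectiveness and choose the best n' strategy."""
    scores = interclass_distance(X1, X2)
    return np.argsort(scores)[::-1][:n], scores

# Toy data: 4 measurements, only the first two separate the classes well
rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0, 5.0, 5.0], 1.0, size=(100, 4))
X2 = rng.normal([3.0, 4.0, 5.0, 5.0], 1.0, size=(100, 4))
best, scores = rank_features(X1, X2, n=2)
print(best)    # indices of the two most discriminative measurements
print(scores)
```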

Optimizing
- Choosing n features out of m measurements, the number of combinations is C(m, n) = m! / (n! (m - n)!); usually an exhaustive comparison is not feasible.
- Some sub-optimal strategies include:
  - Rank features by effectiveness and choose the best.
  - Incrementally add features to the set of chosen features.
  - Successively add and delete features from the chosen set.

Question and Answer Session

Conclusion
Thank you for attending this course.
