CS 388: Natural Language Processing: N-Gram Language Models
Raymond J. Mooney, University of Texas at Austin

Language Models
• Formal grammars (e.g. regular, context-free) give a hard "binary" model of the legal sentences in a language.
• For NLP, a probabilistic model of a language that gives the probability that a string is a member of the language is more useful.
• To specify a correct probability distribution, the probabilities of all sentences in a language must sum to 1.

Uses of Language Models
• Speech recognition: "I ate a cherry" is a more likely sentence than "Eye eight uh Jerry".
• OCR & handwriting recognition: more probable sentences are more likely correct readings.
• Machine translation: more likely sentences are probably better translations.
• Generation: more likely sentences are probably better NL generations.
• Context-sensitive spelling correction: "Their are problems wit this sentence."

Completion Prediction
• A language model also supports predicting the completion of a sentence:
  "Please turn off your cell _____"
  "Your program does not _____"
• Predictive text input systems can guess what you are typing and give choices on how to complete it.

N-Gram Models
• Estimate the probability of each word given its prior context, e.g. P(phone | Please turn off your cell).
• The number of parameters required grows exponentially with the number of words of prior context.
• An N-gram model uses only N−1 words of prior context (see the sketch after this slide):
  Unigram: P(phone)
  Bigram: P(phone | cell)
  Trigram: P(phone | your cell)
• The Markov assumption is the presumption that the future behavior of a dynamical system depends only on its recent history. In particular, in a kth-order Markov model the next state depends only on the k most recent states, so an N-gram model is an (N−1)-order Markov model.
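A minimal Python sketch of these three context sizes, extracting unigram, bigram, and trigram counts from a toy token sequence (the example sentence and the `ngrams` helper are illustrative, not part of the slides):

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "please turn off your cell phone".split()

unigrams = Counter(ngrams(tokens, 1))   # no prior context
bigrams  = Counter(ngrams(tokens, 2))   # 1 word of prior context
trigrams = Counter(ngrams(tokens, 3))   # 2 words of prior context

print(trigrams[("your", "cell", "phone")])   # 1
```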

N-Gram Model Formulas
• Word sequences: $w_1^n = w_1 \ldots w_n$
• Chain rule of probability: $P(w_1^n) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_1^2) \ldots P(w_n \mid w_1^{n-1}) = \prod_{k=1}^{n} P(w_k \mid w_1^{k-1})$
• Bigram approximation: $P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})$
• N-gram approximation: $P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-N+1}^{k-1})$

Estimating Probabilities
• N-gram conditional probabilities can be estimated from raw text based on the relative frequency of word sequences:
  Bigram: $P(w_n \mid w_{n-1}) = \dfrac{C(w_{n-1} w_n)}{C(w_{n-1})}$
  N-gram: $P(w_n \mid w_{n-N+1}^{n-1}) = \dfrac{C(w_{n-N+1}^{n-1} w_n)}{C(w_{n-N+1}^{n-1})}$
• To have a consistent probabilistic model, append a unique start symbol (<s>) and end symbol (</s>) to every sentence and treat these as additional words.
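Putting the two slides above together, a small sketch that estimates bigram probabilities by relative frequency (with <s>/</s> markers) and scores a sentence with the bigram approximation. The two-sentence corpus is a toy example, not from the slides:

```python
from collections import Counter

def train_bigram_mle(sentences):
    """Relative-frequency (MLE) bigram estimates, with <s>/</s> appended to each sentence."""
    unigram_counts, bigram_counts = Counter(), Counter()
    for sent in sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        unigram_counts.update(words)
        bigram_counts.update(zip(words, words[1:]))
    # P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1})
    return {(prev, w): c / unigram_counts[prev]
            for (prev, w), c in bigram_counts.items()}

def sentence_prob(bigram_probs, sentence):
    """Bigram approximation of the chain rule: the product of P(w_k | w_{k-1})."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, w in zip(words, words[1:]):
        prob *= bigram_probs.get((prev, w), 0.0)   # unseen bigram -> 0 (no smoothing yet)
    return prob

probs = train_bigram_mle(["i want english food", "i want chinese food"])
print(sentence_prob(probs, "i want english food"))   # 0.5 on this toy corpus
```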

Generative Model & MLE
• An N-gram model can be seen as a probabilistic automaton for generating sentences:
  Initialize the sentence with N−1 <s> symbols.
  Until </s> is generated, stochastically pick the next word based on the conditional probability of each word given the previous N−1 words.
• Relative frequency estimates can be proven to be maximum likelihood estimates (MLE), since they maximize the probability that the model M will generate the training corpus T: $\hat{M} = \operatorname{argmax}_{M} P(T \mid M)$.
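A sketch of that generative loop for the bigram case, reusing the toy `train_bigram_mle` estimates from the earlier sketch (the `generate` helper is illustrative, not from the slides):

```python
import random

def generate(bigram_probs, max_len=20):
    """Sample a sentence from a bigram model: start at <s>, stop when </s> is drawn."""
    sentence, prev = [], "<s>"
    for _ in range(max_len):
        candidates = [(w, p) for (ctx, w), p in bigram_probs.items() if ctx == prev]
        words, weights = zip(*candidates)
        prev = random.choices(words, weights=weights)[0]   # stochastic pick
        if prev == "</s>":
            break
        sentence.append(prev)
    return " ".join(sentence)

# e.g. with the toy estimates from the earlier sketch:
# print(generate(probs))   # "i want english food" or "i want chinese food"
```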

Example from Textbook
• P(<s> i want english food </s>) = P(i|<s>) P(want|i) P(english|want) P(food|english) P(</s>|food) = .25 × .33 × .0011 × .5 × .68 = .000031
• P(<s> i want chinese food </s>) = P(i|<s>) P(want|i) P(chinese|want) P(food|chinese) P(</s>|food) = .25 × .33 × .0065 × .52 × .68 = .00019
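The arithmetic can be checked directly with the conditional probabilities quoted above (Python 3.8+ for math.prod):

```python
import math

p_english = math.prod([.25, .33, .0011, .5, .68])    # ≈ 0.000031
p_chinese = math.prod([.25, .33, .0065, .52, .68])   # ≈ 0.00019
print(f"{p_english:.6f}  {p_chinese:.5f}")           # 0.000031  0.00019
```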

Train and Test Corpora
• A language model must be trained on a large corpus of text to estimate good parameter values.
• The model can be evaluated based on its ability to predict a high probability for a disjoint (held-out) test corpus (testing on the training corpus would give an optimistically biased estimate).
• Ideally, the training (and test) corpus should be representative of the actual application data.
• It may be necessary to adapt a general model to a small amount of new (in-domain) data by adding a highly weighted small corpus to the original training data.

Unknown Words
• How should words in the test corpus that did not occur in the training data, i.e. out-of-vocabulary (OOV) words, be handled?
• Train a model that includes an explicit symbol for an unknown word (<UNK>), e.g.:
  Choose a vocabulary in advance and replace other words in the training corpus with <UNK>.
  Replace the first occurrence of each word in the training data with <UNK>.
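A minimal sketch of the first strategy, choosing a vocabulary in advance and mapping everything else to <UNK> (the `min_count` cutoff and toy corpus are assumptions for illustration):

```python
from collections import Counter

def build_vocab(corpus, min_count=2):
    """Fix a vocabulary in advance: keep words seen at least `min_count` times."""
    counts = Counter(w for sent in corpus for w in sent.split())
    return {w for w, c in counts.items() if c >= min_count}

def replace_oov(sentence, vocab):
    """Map out-of-vocabulary words to the <UNK> symbol."""
    return " ".join(w if w in vocab else "<UNK>" for w in sentence.split())

vocab = build_vocab(["i want english food", "i want chinese food"])
print(replace_oov("i want thai food", vocab))   # i want <UNK> food
```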

Evaluation of Language Models
• Ideally, evaluate use of the model in the end application (extrinsic, in vivo): realistic, but expensive.
• Evaluate on the ability to model a test corpus (intrinsic): less realistic, but cheaper.
• Verify at least once that intrinsic evaluation correlates with an extrinsic one.

Perplexity
• A measure of how well a model "fits" the test data.
• Uses the probability that the model assigns to the test corpus, normalizes for the number of words N in the test corpus, and takes the inverse:
  $PP(W) = \sqrt[N]{\dfrac{1}{P(w_1 w_2 \ldots w_N)}}$
• Measures the weighted average branching factor in predicting the next word (lower is better).

Sample Perplexity Evaluation
• Models trained on 38 million words from the Wall Street Journal (WSJ) using a 19,979-word vocabulary, evaluated on a disjoint set of 1.5 million WSJ words.

  Model     Perplexity
  Unigram   962
  Bigram    170
  Trigram   109
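A sketch of the perplexity computation for the toy bigram model built earlier, working in log space for numerical stability. The tiny probability floor for unseen bigrams is only a stand-in for proper smoothing (covered next), and conventions differ on whether </s> counts toward N:

```python
import math

def perplexity(bigram_probs, test_sentences):
    """PP(W) = P(w_1 ... w_N)^(-1/N), computed in log space."""
    log_prob, n_words = 0.0, 0
    for sent in test_sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        for prev, w in zip(words, words[1:]):
            p = bigram_probs.get((prev, w), 1e-10)   # tiny floor stands in for smoothing
            log_prob += math.log(p)
            n_words += 1
    return math.exp(-log_prob / n_words)

# e.g. perplexity(probs, ["i want english food"]) with the earlier toy model
```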

Smoothing
• Since there is a combinatorial number of possible word sequences, many rare (but not impossible) combinations never occur in training, so MLE incorrectly assigns zero to many parameters (a.k.a. the sparse data problem).
• If a new combination occurs during testing, it is given a probability of zero and the entire sequence gets a probability of zero (i.e. infinite perplexity).
• In practice, parameters are smoothed (a.k.a. regularized) to reassign some probability mass to unseen events.
• Adding probability mass to unseen events requires removing it from seen ones (discounting) in order to maintain a joint distribution that sums to 1.

Laplace (Add-One) Smoothing
• "Hallucinate" additional training data in which each possible N-gram occurs exactly once and adjust the estimates accordingly (see the sketch below):
  Bigram: $P(w_n \mid w_{n-1}) = \dfrac{C(w_{n-1} w_n) + 1}{C(w_{n-1}) + V}$
  N-gram: $P(w_n \mid w_{n-N+1}^{n-1}) = \dfrac{C(w_{n-N+1}^{n-1} w_n) + 1}{C(w_{n-N+1}^{n-1}) + V}$
  where V is the total number of possible (N−1)-grams (i.e. the vocabulary size for a bigram model).
• Tends to reassign too much mass to unseen events, so it can be adjusted to add 0 < δ < 1 instead (normalized by δV instead of V).
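A sketch of the smoothed bigram estimate, written against count dictionaries like those in the earlier toy example; delta=1 gives add-one, and a smaller delta gives the adjusted variant:

```python
def laplace_bigram_prob(bigram_counts, unigram_counts, vocab_size, prev, w, delta=1.0):
    """Add-one (delta=1) or add-delta estimate:
    (C(prev, w) + delta) / (C(prev) + delta * V)."""
    return ((bigram_counts.get((prev, w), 0) + delta) /
            (unigram_counts.get(prev, 0) + delta * vocab_size))
```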

Advanced Smoothing
• Many advanced techniques have been developed to improve smoothing for language models: Good-Turing, interpolation, backoff, Kneser-Ney, and class-based (cluster) N-grams.

Model Combination
• As N increases, the power (expressiveness) of an N-gram model increases, but the ability to estimate accurate parameters from sparse data decreases (i.e. the smoothing problem gets worse).
• A general approach is to combine the results of multiple N-gram models of increasing complexity (i.e. increasing N).

Interpolation
• Linearly combine the estimates of N-gram models of increasing order.
  Interpolated trigram model: $\hat{P}(w_n \mid w_{n-2}, w_{n-1}) = \lambda_1 P(w_n \mid w_{n-2}, w_{n-1}) + \lambda_2 P(w_n \mid w_{n-1}) + \lambda_3 P(w_n)$, where $\sum_i \lambda_i = 1$.
• Learn proper values for the λ_i by training to (approximately) maximize the likelihood of an independent development (a.k.a. tuning) corpus.
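A sketch of the interpolated trigram estimate, assuming `p_tri`, `p_bi`, and `p_uni` are dictionaries of MLE estimates keyed by word tuples/words; the lambda values here are arbitrary placeholders, not tuned weights:

```python
def interpolated_trigram_prob(p_tri, p_bi, p_uni, w2, w1, w, lambdas=(0.6, 0.3, 0.1)):
    """P_hat(w | w2, w1) = l1*P(w | w2, w1) + l2*P(w | w1) + l3*P(w).
    The lambdas must sum to 1 and would normally be tuned on a dev corpus."""
    l1, l2, l3 = lambdas
    return (l1 * p_tri.get((w2, w1, w), 0.0) +
            l2 * p_bi.get((w1, w), 0.0) +
            l3 * p_uni.get(w, 0.0))
```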

Backoff
• Only use a lower-order model when data for the higher-order model is unavailable (i.e. its count is zero); recursively back off to weaker models until data is available.
  $P_{katz}(w_n \mid w_{n-N+1}^{n-1}) = \begin{cases} P^{*}(w_n \mid w_{n-N+1}^{n-1}) & \text{if } C(w_{n-N+1}^{n}) \ge 1 \\ \alpha(w_{n-N+1}^{n-1})\, P_{katz}(w_n \mid w_{n-N+2}^{n-1}) & \text{otherwise} \end{cases}$
• where P* is a discounted probability estimate that reserves mass for unseen events and the α's are back-off weights (see the text for details).
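A deliberately simplified sketch of the recursive back-off idea. Unlike true Katz backoff, it applies no discounting (P*) and no back-off weights (α), so the result is not a properly normalized distribution; the `counts` dictionary is an assumed data structure mapping word tuples of any length to corpus frequencies:

```python
def backoff_prob(counts, total_words, context, w):
    """Use the highest-order relative-frequency estimate whose counts exist,
    dropping the oldest context word when the higher-order count is zero."""
    if context:
        if counts.get(context + (w,), 0) > 0 and counts.get(context, 0) > 0:
            return counts[context + (w,)] / counts[context]
        return backoff_prob(counts, total_words, context[1:], w)   # back off
    return counts.get((w,), 0) / total_words                       # unigram fallback

# counts maps tuples to frequencies, e.g. counts[("your", "cell", "phone")] = 3
```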

A Problem for N-Grams: Long-Distance Dependencies
• Often the local context does not provide the most useful predictive clues, which are instead provided by long-distance dependencies.
  Syntactic dependencies: "The man next to the large oak tree near the grocery store on the corner is tall." / "The men next to the large oak tree near the grocery store on the corner are tall."
  Semantic dependencies: "The bird next to the large oak tree near the grocery store on the corner flies rapidly." / "The man next to the large oak tree near the grocery store on the corner talks rapidly."
• More complex models of language are needed to handle such dependencies.

Summary
• Language models assign a probability that a sentence is a legal string in a language.
• They are useful as a component of many NLP systems, such as ASR, OCR, and MT.
• Simple N-gram models are easy to train on unsupervised corpora and can provide useful estimates of sentence likelihood.
• MLE gives inaccurate parameters for models trained on sparse data.
• Smoothing techniques adjust parameter estimates to account for unseen (but not impossible) events.
