
Feature Selection for Ranking

ABSTRACT

The reality is that many feature selection methods used in classification are directly applied to ranking. We argue that, because of the striking differences between ranking and classification, it is better to develop dedicated feature selection methods for ranking. In this paper we propose a new feature selection method: we define the ranking accuracy of a feature, in terms of a performance measure or a loss function, as its importance, and the correlation between the ranking results of two features as the similarity between them. We also demonstrate how to solve the resulting optimization problem in an efficient way. We have tested the effectiveness of our feature selection method on two information retrieval datasets and with two ranking models.

1. INTRODUCTION

The two ranking models we consider are Ranking SVM and RankNet. Two observations motivate this work. First, feature selection can help enhance accuracy in many machine learning problems, which strongly suggests that feature selection is also necessary for ranking. Second, feature selection can also help improve the efficiency of training: in information retrieval, and especially in web search, the data size is usually very large, and thus training ranking models is computationally costly.

There have been no feature selection methods dedicatedly proposed for ranking; most of the methods used in ranking were developed for classification. Feature selection methods in classification fall into three categories. In the first category, named "filter", feature selection is defined as a preprocessing step and can be independent from learning; information gain (IG) and chi-square (CHI) are among the most effective filter methods for classification. The second category, referred to as "wrapper", utilizes the learning system as a black box to score subsets of features.

The third category, called "embedded", performs feature selection within the process of training.

Several problems may arise when these feature selection methods are applied to ranking. First, existing feature selection methods for classification are not suitable for ranking: in ranking, a number of ordered categories are used, representing the ranking relationships between instances, whereas in classification the categories are "flat". Second, the evaluation measures used in ranking problems, e.g., mean average precision (MAP) and normalized discounted cumulative gain (NDCG), are different from those used in classification: 1) in ranking, precision is usually more important than recall, while in classification both precision and recall are important; 2) in ranking, correctly ranking the top-n instances is more critical, while in classification a correct decision is of equal significance for all instances.

The novel method we propose for feature selection in ranking has the following properties:
1) It makes use of ranking information, instead of simply viewing the ranks as flat categories.
2) Inspired by the work in 11427, it considers the similarities between features and tries to avoid selecting redundant features.
3) It models feature selection for ranking as a multi-objective optimization problem; the final objective is to find a set of features with maximum importance and minimum similarity.
4) It provides a greedy search algorithm to solve the optimization problem; the solution produced is proven to be equivalent to the optimal solution of the original problem under a certain condition.

2. FEATURE SELECTION METHOD

Overview: Suppose the goal is to select t (1 ≤ t ≤ m) features from the entire feature set {f_1, f_2, ..., f_m}. In our method we first define an importance score w_i for each feature f_i, and define a similarity score e_{i,j} between any two features f_i and f_j. We then employ an efficient algorithm to maximize the total importance score and minimize the total similarity score of the selected set of features.

2.2 Importance of a feature

We first assign an importance score to each feature. Specifically, we propose using either an evaluation measure such as MAP or NDCG, or a loss function (e.g., pair-wise ranking errors [10][13]), to compute the importance score. In the former case, we rank the instances using the feature alone, evaluate the performance in terms of the measure, and take the evaluation result as the importance score. In the latter case, we likewise rank the instances using the feature alone, and then take a score inversely proportional to the corresponding loss as the importance score.
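
As a concrete illustration of the evaluation-measure option, the sketch below computes a MAP-based importance score for one feature. It assumes binary relevance labels and a dataset held as a list of per-query pairs (X, y), with X a feature matrix and y the labels; the helper names are ours, not the paper's.

    import numpy as np

    def average_precision(scores, labels):
        # Rank one query's documents by descending score, then average the
        # precision values at the ranks where relevant documents appear.
        order = np.argsort(-np.asarray(scores, dtype=float))
        labels = np.asarray(labels)[order]
        if labels.sum() == 0:
            return 0.0
        hits = np.cumsum(labels)
        precision_at_n = hits / (np.arange(len(labels)) + 1)
        return float((precision_at_n * labels).sum() / labels.sum())

    def importance_map(queries, j):
        # w_j: MAP obtained when feature j alone acts as the ranking model.
        return float(np.mean([average_precision(X[:, j], y)
                              for X, y in queries]))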

2.3 Similarity between features

In this work, we measure the similarity between any two features on the basis of their ranking results. That is, we regard each feature as a ranking model, and the similarity between two features is represented by the similarity between the ranking results that they produce. As an example we choose Kendall's τ. The Kendall's τ value of query q for any two features f_i and f_j can be calculated as follows:

    τ_q(f_i, f_j) = (#{concordant pairs} − #{discordant pairs}) / (n_q(n_q − 1)/2)

where n_q is the number of instances for query q, and a pair of instances is concordant if f_i and f_j rank its two members in the same order. The similarity score e_{i,j} is then obtained by averaging τ_q(f_i, f_j) over all queries.
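
A minimal sketch of this similarity, reusing the per-query (X, y) layout from the previous snippet and scipy's kendalltau; averaging the raw τ values over queries follows the description above, while the treatment of ties and degenerate queries is an assumption on our part.

    import numpy as np
    from scipy.stats import kendalltau

    def similarity_kendall(queries, i, j):
        # e_{i,j}: mean over queries of Kendall's tau between the rankings
        # induced by feature i and feature j on the query's documents.
        taus = []
        for X, _ in queries:
            tau, _ = kendalltau(X[:, i], X[:, j])
            taus.append(0.0 if np.isnan(tau) else tau)  # constant feature -> NaN
        return float(np.mean(taus))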

2.4 Optimization formulation

We want to select those features with the largest total importance score and the smallest total similarity score. Mathematically, this can be represented as follows:

    max  Σ_i w_i x_i
    min  Σ_{i<j} e_{i,j} x_i x_j                                  (1)
    s.t. Σ_i x_i = t,  x_i ∈ {0, 1},  i = 1, ..., m

where x_i indicates whether feature f_i is selected. In (1) there are two objectives: to maximize the sum of the importance scores of the individual features, and to minimize the sum of the similarity scores between any two selected features. We take a common approach in optimization and convert the multi-objective program into a single-objective program using a linear combination:

    max  Σ_i w_i x_i − c Σ_{i<j} e_{i,j} x_i x_j                  (2)
    s.t. Σ_i x_i = t,  x_i ∈ {0, 1},  i = 1, ..., m

where the constant c ≥ 0 controls the trade-off between importance and redundancy.
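
To make objective (2) concrete, the following brute-force selector scores every size-t subset; it is feasible only for toy m, which is exactly why the next section introduces a greedy algorithm. Representing e as a full symmetric m×m matrix is our assumption.

    from itertools import combinations

    def objective(S, w, e, c):
        # Value of (2) for a candidate subset S of feature indices.
        importance = sum(w[i] for i in S)
        redundancy = sum(e[i][j] for a, i in enumerate(S) for j in S[a + 1:])
        return importance - c * redundancy

    def exhaustive_select(w, e, t, c):
        # Examines all C(m, t) subsets: optimal, but exponentially slow.
        m = len(w)
        return max(combinations(range(m), t),
                   key=lambda S: objective(S, w, e, c))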

2.5 Solution to the optimization problem

The optimization problem in (2) is a typical 0-1 integer programming problem. One possible approach would be to perform an exhaustive search; however, its time complexity, O(C(m, t)), is too high for it to be applicable in real applications, and we need to look for more practical solutions. In this work we propose a greedy search algorithm for tackling the issue.

Algorithm GAS (Greedy search Algorithm of feature Selection):
1. Construct an undirected graph G_0, in which each node represents a feature f_i, the weight of node f_i is w_i, and the weight of the edge between nodes f_i and f_j is e_{i,j}.
2. Construct a set S to contain the selected features; initially S_0 = ∅.
3. For i = 1, ..., t:
   (1) Select the node with the largest weight; without loss of generality, suppose that the selected node is f_{k_i}.
   (2) Apply a punishment to all the other nodes according to their similarities with f_{k_i}; that is, update the weights of all the other nodes as follows: w_j ← w_j − e_{k_i,j} · 2c.
   (3) Add f_{k_i} to the set S (S_i = S_{i−1} ∪ {f_{k_i}}) and remove it from the graph together with all the edges connected to it.
4. Output S_t.
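
A direct transcription of GAS into Python, under the same conventions as before (w the importance vector, e the symmetric similarity matrix, c the trade-off constant of (2)); a sketch of the algorithm as described above, not the authors' code.

    import numpy as np

    def gas(w, e, t, c):
        # Greedy search Algorithm of feature Selection (GAS).
        w = np.array(w, dtype=float)      # node weights, updated in place
        e = np.asarray(e, dtype=float)    # edge weights (similarities)
        alive = np.ones(len(w), dtype=bool)
        selected = []
        for _ in range(t):
            # Step (1): pick the remaining node with the largest weight.
            k = max(np.flatnonzero(alive), key=lambda i: w[i])
            selected.append(int(k))
            alive[k] = False
            # Step (2): punish all remaining nodes by their similarity to k.
            w[alive] -= 2.0 * c * e[k, alive]
        return selected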

Theorem 1: With the greedy search algorithm in Fig. 1 one can find the optimal solution to problem (2), provided that S*_t ⊆ S*_{t+1} for every t, where S*_t denotes the optimal selected feature set with t features.

Proof (sketch): The condition indicates that when selecting the (t+1)-th feature we do not change the already-selected t features. Denote S_t = {f_{k_i} | i = 1, ..., t}, where f_{k_i} is the feature selected in the i-th iteration. The task then turns out to be that of finding the (t+1)-th feature such that

    f_{k_{t+1}} = argmax_{f_j ∉ S_t} ( w_j − 2c Σ_{f_{k_i} ∈ S_t} e_{k_i,j} )    (3)

Since the accumulated punishments in step 3(2) of GAS subtract exactly 2c Σ_{f_{k_i} ∈ S_t} e_{k_i,j} from each remaining weight w_j, the node selected by GAS in iteration t+1 maximizes (3), and the greedy solution coincides with the optimal one.

3. EXPERIMENT SETTINGS

3.1 Datasets

In our experiments we used two benchmark datasets: 1) the .gov data, which was used in the topic distillation task of the Web track of TREC 2004 [28]; and 2) the OHSUMED data [9], which has been used in many experiments in information retrieval [6][10], including the TREC-9 filtering track [26].

3.2 Evaluation measures

3.2.1 Mean average precision (MAP)

MAP is a measure of the precision of ranking results. It is assumed that there are two types of documents: positive and negative (relevant and irrelevant). Precision at n measures the accuracy of the top n results for a query:

    P@n = #{relevant documents in top n results} / n

The average precision of a query is calculated based on precision at n:

    AP = Σ_{n=1}^{N} (P@n × rel(n)) / #{relevant documents for the query}

where N is the number of retrieved documents and rel(n) equals 1 if the n-th document is relevant and 0 otherwise; MAP averages AP over all queries.
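
Reusing the average_precision helper from the sketch in section 2.2, MAP is simply the mean of AP over queries; scores_fn below stands for any ranking model mapping a query's feature matrix to scores (an illustrative parameter, not the paper's notation).

    import numpy as np

    def mean_average_precision(queries, scores_fn):
        # MAP: mean of per-query average precision.
        return float(np.mean([average_precision(scores_fn(X), y)
                              for X, y in queries]))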

3.2.2 Normalized discounted cumulative gain (NDCG)

NDCG is designed for measuring ranking accuracy when there are multiple levels of relevance judgment. Given a query, NDCG at position n is defined as

    NDCG@n = Z_n Σ_{j=1}^{n} (2^{r(j)} − 1) / log(1 + j)

where r(j) is the relevance rating of the j-th ranked document and Z_n is a normalization constant chosen so that a perfect ranking gives NDCG@n = 1.
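
A sketch of NDCG@n under the common convention of a base-2 logarithm in the discount; normalization against the ideal ordering plays the role of Z_n.

    import numpy as np

    def ndcg_at_n(ratings, n):
        # ratings: relevance grades r(j) of the documents in ranked order.
        gains = 2.0 ** np.asarray(ratings, dtype=float) - 1.0
        discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))  # log2(1 + j)
        dcg = float((gains[:n] * discounts[:n]).sum())
        ideal = np.sort(gains)[::-1]                  # perfect ranking
        idcg = float((ideal[:n] * discounts[:n]).sum())
        return dcg / idcg if idcg > 0 else 0.0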

3.3 Ranking models

3.3.1 Ranking SVM

Ranking SVM extends SVM to ranking: in contrast to traditional SVM, which works on instances, Ranking SVM utilizes instance pairs and their preference labels in training. The optimization formulation of Ranking SVM is as follows:

    min_{w,ξ}  (1/2)‖w‖² + C Σ_i ξ_i
    s.t.  ⟨w, x_i^{(1)} − x_i^{(2)}⟩ ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, ..., ℓ

where each training pair (x_i^{(1)}, x_i^{(2)}) indicates that x_i^{(1)} should be ranked higher than x_i^{(2)}.
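
For the linear case, this formulation can be approximated by the standard pairwise-difference transform followed by a large-margin linear classifier; using scikit-learn's LinearSVC as the solver is a convenience of ours, not the implementation used in the paper.

    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_transform(X, y):
        # Build difference vectors x1 - x2 for pairs with different labels;
        # the sign of the preference becomes the classification target.
        diffs, targets = [], []
        for i in range(len(y)):
            for j in range(i + 1, len(y)):
                if y[i] != y[j]:
                    diffs.append(X[i] - X[j])
                    targets.append(1 if y[i] > y[j] else -1)
        return np.asarray(diffs), np.asarray(targets)

    def train_linear_ranking_svm(X, y, C=1.0):
        Xp, yp = pairwise_transform(np.asarray(X, dtype=float), np.asarray(y))
        model = LinearSVC(C=C, loss="hinge", fit_intercept=False)
        model.fit(Xp, yp)
        return model.coef_.ravel()    # w; rank new documents by X @ w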

3.3.2 RankNet

RankNet also uses instance pairs in training; it employs a neural network as the ranking function and relative entropy as the loss function. Let P_ij be the estimated posterior probability P(x_i ▷ x_j), let P̄_ij be the "true" posterior probability, and let o_ij = f(x_i) − f(x_j). The loss for an instance pair in RankNet is defined as

    L_ij = −P̄_ij log P_ij − (1 − P̄_ij) log(1 − P_ij)
         = −P̄_ij o_ij + log(1 + e^{o_ij})

with P_ij = e^{o_ij} / (1 + e^{o_ij}).
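
The pair loss in its numerically stable second form; p_true plays the role of P̄_ij and o_ij is the score difference f(x_i) − f(x_j).

    import numpy as np

    def ranknet_pair_loss(o_ij, p_true):
        # -P̄*o + log(1 + exp(o)), computed stably via logaddexp.
        return -p_true * o_ij + np.logaddexp(0.0, o_ij)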

3.4 Algorithms for comparison

Our proposed algorithm has two variants, listed in the following table:

    Variant | Importance score computed with
    GAS-E   | an evaluation measure (e.g., MAP or NDCG)
    GAS-L   | the loss function of the ranking model

For comparison we selected IG and CHI as baselines: IG measures the reduction in uncertainty (entropy) of the classification prediction when the feature is known, and CHI measures the degree of independence between the feature and the categories. We also used "With All Features" (WAF) as another baseline, in order to show the benefit of conducting feature selection.
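
For the two baselines, scikit-learn offers off-the-shelf analogues: mutual_info_classif as an information-gain-style score and chi2 for the chi-square statistic, with the ranks treated as flat class labels y, which is exactly the simplification the paper criticizes. A sketch, not the paper's implementation.

    import numpy as np
    from sklearn.feature_selection import chi2, mutual_info_classif

    def baseline_scores(X, y):
        X = np.asarray(X, dtype=float)
        ig = mutual_info_classif(X, y)       # IG-style relevance per feature
        X_nonneg = X - X.min(axis=0)         # chi2 requires nonnegative input
        chi, _ = chi2(X_nonneg, y)           # CHI statistic per feature
        return ig, chi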

4. EXPERIMENTAL RESULTS

4.1 The .gov data

Fig. 2 shows the performance of the feature selection methods on the .gov dataset when they work as preprocessors of Ranking SVM, and Fig. 3 shows the performance when RankNet is used as the ranking model. In the figures, the x-axis represents the number of selected features. The experimental results indicate that in most cases GAS-L can outperform GAS-E, although not significantly. The results also indicate that with GAS-L and GAS-E as feature selection methods, the ranking performance of Ranking SVM is more stable than with IG and CHI.

4.2 OHSUMED data

Fig. 4 shows the results of the different feature selection methods on the OHSUMED dataset when they work as preprocessors of Ranking SVM, and Fig. 5 shows the results when they work as preprocessors of RankNet.

4.3 Discussions

From the results on the two datasets, we made the following observations: 1) feature selection can improve the ranking performance more significantly for the .gov dataset than for the OHSUMED dataset; 2) our proposed algorithms outperform IG and CHI more significantly for the .gov dataset than for the OHSUMED dataset. To figure out the reasons, we conducted additional experiments. We first plotted the importance of each feature in the two datasets in Fig. 6, where the x-axis represents the features and the y-axis represents their MAP values when they are regarded as ranking models. Furthermore, we plotted the similarity between any two features (in terms of Kendall's τ) in the two datasets in Fig. 7; here both the x-axis and the y-axis represent features, and the level of darkness represents the strength of similarity (the darker, the more similar). Based on these observations, we conclude that if the effects of the features vary largely and there are redundant features, our method can work very well. When applying our method in practice, therefore, one can first test these two aspects.

5. CONCLUSIONS AND FUTURE WORK

In this paper we have proposed an optimization method for feature selection in ranking. The contributions of this paper include the following points:

1) We have discussed the differences between classification and ranking, and made clear the limitations of existing feature selection methods when applied to ranking.
2) We have proposed a novel method to select features for ranking, in which the problem is formalized as an optimization issue.
3) We have evaluated the proposed method using two public datasets, with two ranking models, and in terms of a number of evaluation measures; the experimental results have validated the effectiveness and efficiency of the proposed method.

Feature selection for ranking remains an important research topic, with many open problems that need to be addressed:
1) In this paper we have used measures such as MAP and NDCG to compute the importance of a feature, and measures such as Kendall's τ to compute the similarity between features; other measures could also be investigated.
2) We have only given a greedy search algorithm for the optimization, which is guaranteed to find the optimal solution of the integer programming problem only under a certain condition; with more powerful optimization techniques, one can expect an improvement in ranking performance over the results reported in this paper.
3) There are two objectives in our optimization method for feature selection, which we have combined linearly for simplicity; in principle, one could employ other ways to represent the trade-off between the two objectives.
4) We have demonstrated the effectiveness of our method with two datasets and with a small number of manually extracted features; it is necessary to further conduct experiments on larger datasets and with more features.
