Neural Network Theory and Applications
Homework Assignment 3
oxstar (SJTU)
January 19, 2012

1 Data Preprocessing

First we used svm-scale from LibSVM to scale the data. Scaling has two main advantages: one is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges; the other is to avoid numerical difficulties during the calculation [1]. We linearly scaled each attribute to the range [-1, +1].

2 Model Selection

We tried three different kernel functions, namely linear, polynomial and RBF:

    linear:                      K(x_i, x_j) = x_i^T x_j
    polynomial:                  K(x_i, x_j) = (γ x_i^T x_j + r)^d,  γ > 0
    radial basis function (RBF): K(x_i, x_j) = exp(-γ ||x_i - x_j||^2),  γ > 0

The penalty parameter C and the kernel parameters (γ, r, d) have to be chosen. We ran a grid search [1] over C and γ, with r and d kept at their default values of 0 and 3. Figure 1 presents the contour maps used to choose suitable parameter values. We only searched for local maxima, since the global maximum is usually difficult to find and the running time increases dramatically as the parameter values grow. Note that "ovr" stands for the one-versus-rest task decomposition method, while "ovo" is short for one-versus-one and "pvp" is short for part-versus-part.

Since the linear kernel has no kernel-specific parameters, only the penalty parameter C needs to be searched.
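As a concrete illustration, the preprocessing and the three kernels above can be sketched in plain Python. This is a minimal sketch of the formulas, not LibSVM's implementation; all function names are ours:

```python
import math

def scale_column(values, lo=-1.0, hi=1.0):
    # Linearly map one attribute (feature column) to [lo, hi], as svm-scale does.
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [0.0 for _ in values]
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

def linear_kernel(x, y):
    # K(x_i, x_j) = x_i^T x_j
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, gamma=1.0, r=0.0, d=3):
    # K(x_i, x_j) = (gamma * x_i^T x_j + r)^d, with the report's defaults r=0, d=3.
    return (gamma * linear_kernel(x, y) + r) ** d

def rbf_kernel(x, y, gamma=1.0):
    # K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```

For example, `scale_column([0, 5, 10])` maps the column to `[-1.0, 0.0, 1.0]`, and `rbf_kernel(x, x)` is always 1 regardless of γ.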

The results are shown in Figure 2. The final parameter selection for each decomposition method and kernel is presented in Table 1.

Table 1: Selected Parameters for Each Decomposition Method and Kernel

    Decomposition      Kernel      C      γ
    one-versus-rest    RBF         10     1.0
                       Polynomial  0.1    0.7
                       Linear      1      -
    one-versus-one     RBF         1      1.5
                       Polynomial  0.01   0.2
                       Linear      0.1    -
    part-versus-part   RBF         1      0.1
                       Polynomial  0.01   0.4
                       Linear      1      -

[Figure 1: Grid search for the RBF and polynomial kernels (contour maps of accuracy over γ and lg(cost)): (a) RBF kernel (ovr), (b) polynomial kernel (ovr), (c) RBF kernel (ovo), (d) polynomial kernel (ovo), (e) RBF kernel (pvp), (f) polynomial kernel (pvp).]
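The grid search behind Figures 1 and 2 can be sketched as a loop over candidate (C, γ) pairs. Here `cv_accuracy` is a stand-in for training an SVM and measuring its cross-validation accuracy (done with LibSVM in our experiments), and the mock accuracy surface below is purely hypothetical:

```python
import itertools

def grid_search(cv_accuracy, cs, gammas):
    # Return the (C, gamma) pair with the highest cross-validation accuracy.
    return max(itertools.product(cs, gammas),
               key=lambda pair: cv_accuracy(*pair))

# Hypothetical accuracy surface peaking at C = 10, gamma = 1.0:
mock_accuracy = {(1, 0.5): 40.0, (1, 1.0): 45.0,
                 (10, 0.5): 48.0, (10, 1.0): 52.0}
best_c, best_gamma = grid_search(lambda c, g: mock_accuracy[(c, g)],
                                 [1, 10], [0.5, 1.0])
```

In practice the candidate lists are exponentially spaced (hence the lg(cost) axes in the figures), and only a local maximum of the surface is sought.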

[Figure 2: Parameter search for the linear kernel (accuracy over lg(cost)): (a) ovr, (b) ovo, (c) pvp.]

3 Experiments

3.1 Task Decomposition Methods

Several multi-class classification techniques have been proposed for SVM models. The most typical approach to decomposing the task is the so-called one-versus-rest method, which separates one class from all the others. Assume we construct N two-class classifiers; a test datum is assigned to C_i iff the ith classifier recognizes it. However, more than one classifier may recognize it; in that case we assign the datum to C_i if the ith classifier gives the largest decision value. On the other hand, if no classifier recognizes it, we assign it to C_i if the ith classifier gives the smallest decision value for classifying it into the "rest" class.

One-versus-one, which combines all possible two-class classifiers, is another methodology for dealing with multi-class problems [3]. The number of classifiers grows super-linearly with the number of classes, but the running time may not, because each divided problem is much smaller. We used an election (voting) strategy to make the final decision: the datum is assigned to C_i if the classifiers voting for the ith class are the most numerous.

The part-versus-part method is another choice [4]. Any two-class problem can be further decomposed into a number of two-class sub-problems as small as needed, which makes the method good at dealing with unbalanced classification problems. As shown in Table 2, the number of training data in each class of our dataset is indeed unbalanced.

Table 2: Number of Training Data in Each Class

    Class   Number of Training Data   Class   Number of Training Data
    0       537                       6       74
    1       994                       7       584
    2       32                        8       1545
    3       91                        9       100
    4       689                       10      1338
    5       38                        11      43

We used the MAX-MIN strategy to make the final decisions.
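The one-versus-rest and one-versus-one decision rules described in this subsection can be sketched as follows. This is our own minimal illustration; it assumes the per-classifier decision values and pairwise winners are already available (e.g. from LibSVM):

```python
def ovr_decide(decision_values):
    # decision_values[i]: decision value of the ith one-versus-rest classifier.
    # Taking the argmax covers both cases in the text: if several classifiers
    # recognize the datum we pick the largest positive value, and if none
    # does we pick the least negative one.
    return max(range(len(decision_values)), key=lambda i: decision_values[i])

def ovo_decide(pairwise_winners, n_classes):
    # pairwise_winners: the winning class of each of the N(N-1)/2 pairwise
    # classifiers; the class collecting the most votes is elected.
    votes = [0] * n_classes
    for w in pairwise_winners:
        votes[w] += 1
    return max(range(n_classes), key=lambda i: votes[i])
```

For instance, `ovr_decide([-0.9, -0.1, -0.5])` returns class 1 even though no classifier fired, matching the "smallest decision value for the rest class" rule.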

We also had to determine the size of the smallest parts, which strongly affects classification performance. Based on Figure 3, we chose 200 as the size of each sub-class, because the accuracy reaches a local maximum there, while choosing 1600 would make no sense.

3.2 Results

In our experiments we used the Java version of LibSVM [2].

[Figure 3: Relationship between sub-class size and accuracy, for sub-class sizes (N per part) of 25, 50, 100, 200, 400, 800 and 1600.]

[Figure 4: Performance of each task decomposition method (ovr, ovo, pvp) and each kernel (RBF, polynomial, linear): (a) accuracy (%), (b) running time (s).]

The accuracy and the running time are shown in Figure 4a and Figure 4b, respectively.

3.3 Discussion

Compared with ovo and pvp, the one-versus-rest decomposition method always had the worst accuracy, no matter which kernel was used. However, thanks to its simple procedure, only N classifiers are required for an N-class problem, so its scalability is better than that of the other methods. The one-versus-one decomposition method performed the best in our experiments.
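The scalability argument can be made concrete by counting classifiers. One-versus-rest trains N classifiers and one-versus-one trains N(N-1)/2; for part-versus-part, under our reading of [4], each class is split into parts of at most a chosen size (200 here), and every pair of parts from two different classes gets its own sub-classifier. The pvp count below is therefore an illustrative assumption, not a figure taken from the report:

```python
import math

def num_ovr(n_classes):
    # one classifier per class against the rest
    return n_classes

def num_ovo(n_classes):
    # one classifier per unordered pair of classes
    return n_classes * (n_classes - 1) // 2

def num_pvp(class_sizes, part_size=200):
    # Split class i into ceil(n_i / part_size) parts, then pair parts
    # across every pair of classes (illustrative counting).
    parts = [math.ceil(n / part_size) for n in class_sizes]
    return sum(parts[i] * parts[j]
               for i in range(len(parts))
               for j in range(i + 1, len(parts)))
```

For the 12-class problem studied here this gives 12 ovr classifiers versus 66 ovo classifiers, which is why ovr scales better even though it was the least accurate.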
