Neural Network Theory and Applications
Homework Assignment 3
oxstar, SJTU
January 19, 2012

1 Data Preprocessing

First we used svm-scale of LibSVM to scale the data. There are two main advantages of scaling: one is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges; the other is to avoid numerical difficulties during the calculation [1]. We linearly scaled each attribute to the range [-1, +1].

2 Model Selection

We tried three different kernel functions, namely linear, polynomial and RBF:

    linear: K(x_i, x_j) = x_i^T x_j
    polynomial: K(x_i, x_j) = (γ x_i^T x_j + r)^d, γ > 0
    radial basis function (RBF): K(x_i, x_j) = exp(-γ ||x_i - x_j||^2), γ > 0

The penalty parameter C and the kernel parameters (γ, r, d) must be chosen. We used grid-search [1] on C and γ, while r and d were kept at their default values, 0 and 3. Figure 1 presents the contour maps for choosing proper parameter values. We only searched for local maxima, since the global maximum is usually difficult to find and the running time increases dramatically as the parameter values grow. Note that ovr stands for the one-versus-rest task decomposition method, while ovo is short for one-versus-one and pvp is short for part-versus-part.

The linear kernel has no kernel-specific parameters, so we only need to search for the penalty parameter C.
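As an illustration, the scaling and grid-search steps above can be sketched in plain Python. This is only a sketch: `scale_to_range` mimics what svm-scale does for a single attribute, and `cv_accuracy` is a hypothetical callback standing in for a cross-validated LibSVM training run, not a real LibSVM API.

```python
import itertools

def scale_to_range(column, lo=-1.0, hi=1.0):
    """Linearly map one attribute column to [lo, hi], as svm-scale does."""
    cmin, cmax = min(column), max(column)
    if cmax == cmin:                      # constant attribute: map to midpoint
        return [(lo + hi) / 2.0] * len(column)
    return [lo + (hi - lo) * (v - cmin) / (cmax - cmin) for v in column]

def grid_search(cv_accuracy, log2C_range, log2gamma_range):
    """Exhaustive grid search over (C, gamma) on a base-2 exponential grid.

    cv_accuracy(C, gamma) -> cross-validation accuracy; here it is an
    assumed, user-supplied evaluation callback.
    """
    best = (None, None, -1.0)
    for lc, lg in itertools.product(log2C_range, log2gamma_range):
        C, gamma = 2.0 ** lc, 2.0 ** lg
        acc = cv_accuracy(C, gamma)
        if acc > best[2]:
            best = (C, gamma, acc)
    return best
```

In practice the callback would train an SVM on k-1 folds and test on the remaining fold; here any scoring function of (C, γ) can be plugged in.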
The results are shown in Figure 2. The final selection for each parameter is presented in Table 1.

Table 1: Parameter Selection for Each Decomposition Method and Kernel

    Decomposition      Kernel       C      γ
    one-versus-rest    RBF          10     1.0
                       Polynomial   0.1    0.7
                       Linear       1      -
    one-versus-one     RBF          1      1.5
                       Polynomial   0.01   0.2
                       Linear       0.1    -
    part-versus-part   RBF          1      0.1
                       Polynomial   0.01   0.4
                       Linear       1      -

[Figure 1: Grid Search for RBF and Polynomial Kernels; panels (a)-(f) show accuracy contours over γ and lg(cost) for the RBF and polynomial kernels under ovr, ovo and pvp.]

[Figure 2: Parameter Search for the Linear Kernel; panels (a)-(c) plot accuracy against lg(cost) for ovr, ovo and pvp.]

3 Experiments

3.1 Task Decomposition Methods

Several multi-class classification techniques have been proposed for SVM models. The most typical approach to task decomposition is the so-called one-versus-rest method, which separates one class from all the others.
Assume that we construct N two-class classifiers; a test datum is assigned to C_i iff the i-th classifier recognizes it. However, more than one classifier may recognize it; in this case we assign it to C_i if the i-th classifier gives the largest decision value. Conversely, if no classifier recognizes it, we assign it to C_i if the i-th classifier gives the smallest decision value for classifying it into the rest class.

One-versus-one, which combines all possible two-class classifiers, is another methodology for dealing with multi-class problems [3]. The number of classifiers grows super-linearly with the number of classes, but the running time may not, because each divided problem is much smaller. We used a voting strategy to make the final decisions: a datum belongs to C_i if the number of pairwise classifiers that classify it into the i-th class is the largest.
The part-versus-part method is another choice [4]. Any two-class problem can be further decomposed into a number of two-class sub-problems, as small as needed, so it is good at dealing with unbalanced classification problems. As shown in Table 2, the number of training data in each class of our dataset is indeed unbalanced.

[Table 2: Number of Training Data in Each Class]

We used the MAX-MIN strategy to make the final decisions.
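A minimal sketch of the part-versus-part machinery, assuming the sub-classifier decision values are already available: `split_into_parts` and `max_min_combine` are illustrative names (not LibSVM APIs), and the MIN-then-MAX combination follows the min-max modular scheme of [4].

```python
def split_into_parts(samples, part_size=200):
    """Split one class's training samples into parts of at most part_size
    (we chose 200 per part, per Figure 3)."""
    return [samples[k:k + part_size] for k in range(0, len(samples), part_size)]

def max_min_combine(sub_outputs):
    """Combine part-versus-part sub-classifier outputs with the MAX-MIN rule:
    sub_outputs[p][q] is the decision value of the sub-classifier trained on
    positive part p vs. negative part q.  Take MIN over the negative parts,
    then MAX over the positive parts."""
    return max(min(row) for row in sub_outputs)
```

Each sub-problem involves at most part_size samples per side, which is what keeps the individual training runs small and balanced.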
We also have to determine the size of the minimum parts, which strongly affects classification performance. Based on Figure 3, we chose 200 as the size of each sub-class, because the accuracy reaches a local maximum there and choosing 1600 would make no sense.

3.2 Results

In our experiments, we used the Java version of LibSVM [2].

[Figure 3: Relationship between Sub-class Size and Accuracy; accuracy is plotted against the number of samples N per part.]

[Figure 4: Performance of Each Task Decomposition Method and Each Kernel; panel (a) shows accuracy (%) and panel (b) running time (s) for the RBF, polynomial and linear kernels under ovr, ovo and pvp.]

The running time and accuracy are shown in Figure 4a and Figure 4b.

3.3 Discussion

Compared with ovo and pvp, the one-versus-rest decomposition method always has the worst accuracy, no matter which kernel is used. However, due to its simple procedure, only N classifiers are required for an N-class problem, so the scalability of this method is better than the others'. The one-versus-one decomposition method performed the best in our experiments