Shanghai Jiao Tong University — Neural Network Theory and Applications, Homework Assignment 1

Neural Network Theory and Applications
Homework Assignment 1
oxstar, SJTU
January 19, 2012

Problem one

One variation of the perceptron learning rule is
\[
W^{new} = W^{old} + \alpha e p^T, \qquad b^{new} = b^{old} + \alpha e,
\]
where $\alpha$ is called the learning rate. Prove convergence of this algorithm. Does the proof require a limit on the learning rate? Explain.

Proof. We combine the weight matrix and the bias into a single vector, and append a 1 to each input:
\[
x = \begin{bmatrix} W \\ b \end{bmatrix}, \qquad
z_q = \begin{bmatrix} p_q \\ 1 \end{bmatrix}. \tag{1}
\]
The net input and the perceptron learning rule can then be written as
\[
W^T p + b = x^T z, \tag{2}
\]
\[
x^{new} = x^{old} + \alpha e z. \tag{3}
\]
We take into account only those iterations on which the weight vector is changed, so the learning rule becomes (WLOG, assume that $x(0) = 0$)
\[
x(k) = x(k-1) + \alpha\, dz(k-1) \tag{4}
\]
\[
= \alpha\, dz(0) + \alpha\, dz(1) + \dots + \alpha\, dz(k-1), \tag{5}
\]
where $dz \in \{-z_Q, \dots, -z_1, z_1, \dots, z_Q\}$. Assume that a correct weight vector $x^*$ exists; then there is a margin $\delta > 0$ such that
\[
x^{*T} dz \ge \delta > 0. \tag{6}
\]
From Equation 5 and Equation 6 we can show that
\[
x^{*T} x(k) = \alpha\, x^{*T} dz(0) + \alpha\, x^{*T} dz(1) + \dots + \alpha\, x^{*T} dz(k-1) \tag{7}
\]
\[
\ge k \alpha \delta. \tag{8}
\]
From the Cauchy–Schwarz inequality we have
\[
\|x^*\|^2\, \|x(k)\|^2 \ge \big(x^{*T} x(k)\big)^2 \ge (k \alpha \delta)^2. \tag{9}
\]
The weights are updated only when the previous weights were incorrect, so $x^T(k-1)\, dz(k-1) \le 0$, and
\[
\|x(k)\|^2 = x^T(k)\, x(k) \tag{10}
\]
\[
= \big[x(k-1) + \alpha\, dz(k-1)\big]^T \big[x(k-1) + \alpha\, dz(k-1)\big] \tag{11}
\]
\[
= \|x(k-1)\|^2 + 2\alpha\, x^T(k-1)\, dz(k-1) + \alpha^2 \|dz(k-1)\|^2 \tag{12}
\]
\[
\le \|x(k-1)\|^2 + \alpha^2 \|dz(k-1)\|^2 \tag{13}
\]
\[
\le \alpha^2 \|dz(0)\|^2 + \dots + \alpha^2 \|dz(k-1)\|^2 \tag{14}
\]
\[
\le \alpha^2 k \max \|dz\|^2. \tag{15}
\]
From Equation 9 and Equation 15 we have
\[
(k \alpha \delta)^2 \le \|x^*\|^2\, \|x(k)\|^2 \le \|x^*\|^2\, \alpha^2 k \max \|dz\|^2, \tag{16}
\]
\[
k \le \frac{\|x^*\|^2 \max \|dz\|^2}{\delta^2}. \tag{17}
\]
The number of updates $k$ is therefore bounded, so the algorithm converges. The proof does not require a limit on the learning rate, because the bound on $k$ does not depend on $\alpha$. Intuitively, using learning rate $\alpha$ is equivalent to multiplying all inputs by $\alpha$ (since $\alpha e z = e(\alpha z)$), and scaling the data proportionally does not change the cost of finding the correct boundary.
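As a numerical check of the conclusion of problem one, the sketch below (my own illustration, not part of the original homework) trains a single perceptron with the variant rule on a small linearly separable set and counts weight updates for several learning rates. With $x(0)=0$, the weight vector is always $\alpha$ times a sum of $\pm z$ vectors, so the sign of the net input, and hence the sequence of mistakes, is the same for every $\alpha > 0$:

```python
import numpy as np

def train_variant_perceptron(X, t, alpha, max_epochs=100):
    """Variant perceptron rule: W += alpha*e*p^T, b += alpha*e.
    Returns (weights, bias, number of updates until convergence)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    updates = 0
    for _ in range(max_epochs):
        changed = False
        for p, target in zip(X, t):
            a = 1 if w @ p + b >= 0 else 0   # hardlim activation
            e = target - a                   # error in {-1, 0, 1}
            if e != 0:
                w += alpha * e * p           # W_new = W_old + alpha*e*p^T
                b += alpha * e               # b_new = b_old + alpha*e
                updates += 1
                changed = True
        if not changed:                      # full pass with no mistakes
            break
    return w, b, updates

# Toy linearly separable data (my own example, not the homework's).
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 1.0]])
t = np.array([1, 1, 0, 0])

counts = [train_variant_perceptron(X, t, a)[2] for a in (1.0, 0.8, 0.5, 0.3, 0.1)]
print(counts)  # same update count at every learning rate
```

The update count is identical across all five learning rates, matching the proof: the bound on $k$ involves $\delta$, $\|x^*\|$, and $\max\|dz\|$, but not $\alpha$.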

Problem two

We have a classification problem with three classes of input vectors:
\[
\text{class 1: } p_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\;
p_2 = \begin{bmatrix} 0 \\ 2 \end{bmatrix},\;
p_3 = \begin{bmatrix} 3 \\ 1 \end{bmatrix};
\qquad
\text{class 2: } p_4 = \begin{bmatrix} 2 \\ -1 \end{bmatrix},\;
p_5 = \begin{bmatrix} 2 \\ 0 \end{bmatrix},\;
p_6 = \begin{bmatrix} 1 \\ -2 \end{bmatrix};
\qquad
\text{class 3: } p_7 = \begin{bmatrix} -1 \\ 2 \end{bmatrix},\;
p_8 = \begin{bmatrix} -2 \\ 1 \end{bmatrix},\;
p_9 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}.
\]
Implement the perceptron network based on the learning rule of problem one to solve this problem. Run your program at different learning rates ($\alpha = 1, 0.8, 0.5, 0.3, 0.1$), then compare and discuss the results.

Ans. In my experiment, the learning rates are set from 0.1 to 1 with a step of 0.05. Figure 1 shows the number of iterations at each learning rate. Note that this perceptron algorithm can only handle two-class problems, so I use two phases of classification to solve the three-class problem. I can hardly find any relationship between the learning rate and the number of iterations: just as proved above, the cost of finding the correct boundary is not influenced much by the learning rate. I also provide the classification results at learning rate $\alpha = 1$ (Figure 2).
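The two-phase scheme mentioned in the answer to problem two can be sketched as follows. This is my own reconstruction, not the author's code: the class vectors (in particular the minus signs, which are easily lost in PDF extraction) and the specific phase split (class 1 vs. the rest, then class 2 vs. class 3) are assumptions.

```python
import numpy as np

def train_perceptron(X, t, alpha=1.0, max_epochs=5000):
    """Perceptron with the variant rule of problem one; returns (w, b)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        changed = False
        for p, target in zip(X, t):
            e = target - (1 if w @ p + b >= 0 else 0)
            if e != 0:
                w, b = w + alpha * e * p, b + alpha * e
                changed = True
        if not changed:
            break
    return w, b

# Class vectors as reconstructed above; the signs are an assumption.
class1 = np.array([[1, 1], [0, 2], [3, 1]], dtype=float)
class2 = np.array([[2, -1], [2, 0], [1, -2]], dtype=float)
class3 = np.array([[-1, 2], [-2, 1], [-1, -1]], dtype=float)

# Phase 1: separate class 1 from classes 2 and 3.
X1 = np.vstack([class1, class2, class3])
t1 = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0])
w1, b1 = train_perceptron(X1, t1)

# Phase 2: separate class 2 from class 3 (reached only when phase 1 says "not class 1").
X2 = np.vstack([class2, class3])
t2 = np.array([1, 1, 1, 0, 0, 0])
w2, b2 = train_perceptron(X2, t2)

def classify(p):
    if w1 @ p + b1 >= 0:
        return 1
    return 2 if w2 @ p + b2 >= 0 else 3

print([classify(p) for p in X1])
```

Each phase is an ordinary two-class perceptron, so the convergence bound of problem one applies to each phase independently, whatever the learning rate.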

Problem three

For the XOR problem as follows,
\[
\text{class 1: } p_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\;
p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix};
\qquad
\text{class 2: } p_3 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\;
p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\]

and the two-spiral problem delivered as the course material: could the perceptron algorithm correctly classify these two problems? If not, explain why.

Ans. Just as the results (Figure 3 and Figure 4) show, the perceptron algorithm cannot correctly classify these two problems. As we know, a single-layer perceptron can only classify linearly separable vectors, while the vectors in these two problems are linearly inseparable.

[Figure 1: Times of Iteration at Different Learning Rates]
[Figure 2: Result for Classification (Problem 2)]
[Figure 3: Result for Classification (XOR Problem)]
[Figure 4: Result for Classification (Two-Spiral Problem)]
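The XOR failure can be checked directly. The sketch below (my own illustration, using the variant rule of problem one) runs the perceptron on the four XOR points and tracks the best error count ever reached; because no line separates $\{(1,0),(0,1)\}$ from $\{(0,0),(1,1)\}$, the error count never reaches zero.

```python
import numpy as np

# XOR data: class 1 -> target 1, class 2 -> target 0.
X = np.array([[1, 0], [0, 1], [0, 0], [1, 1]], dtype=float)
t = np.array([1, 1, 0, 0])

w, b, alpha = np.zeros(2), 0.0, 1.0
min_errors = len(X)
for _ in range(1000):  # far longer than any separable 4-point problem would need
    for p, target in zip(X, t):
        e = target - (1 if w @ p + b >= 0 else 0)
        w, b = w + alpha * e * p, b + alpha * e
    errors = sum(target != (1 if w @ p + b >= 0 else 0)
                 for p, target in zip(X, t))
    min_errors = min(min_errors, errors)

print(min_errors)  # stays above 0: XOR is not linearly separable
```

If some epoch ever ended with zero errors, the final weights would define a separating line, contradicting the inseparability argument in the answer above, so the loop provably never converges.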
