BP Neural Network Example with Source Code

I: About BP networks

A BP (Back Propagation) neural network learns by the error back-propagation algorithm, in which information propagates forward and error propagates backward. The neurons of the input layer receive the external inputs and pass them to the neurons of the middle layer. The middle layer is the internal information-processing stage that transforms the data; depending on the required capacity it can be designed with a single hidden layer or with several. The information passed from the last hidden layer to the output-layer neurons is processed once more, the output layer delivers the result to the outside world, and one forward pass of a learning step is complete.

When the actual output does not agree with the desired output, the error back-propagation phase begins.

The error is passed back from the output layer toward the hidden and input layers, layer by layer, and the weights of every layer are corrected by gradient descent on the error. This repeated cycle of forward information propagation and backward error propagation, in which the layer weights are adjusted again and again, is the training process of the network; it continues until the output error has fallen to an acceptable level or a preset number of learning iterations has been reached.

BP networks are mainly applied to function approximation, pattern recognition and classification, and data compression. They generalize fairly well: the trained network approximates the target function smoothly and responds reasonably to inputs it was not trained on. They also have limitations: training can take a long time, the trained weights may end up near a local optimum, and generalization is poor for data outside the range of the training set.

To keep training from getting stuck in a local optimum, this program uses an improved BP training rule that adds a momentum factor. The momentum lets the network oscillate slightly around an optimum and so escape the neighbourhood of a local minimum. The choice of learning rate and momentum factor is very important in BP training and is discussed in detail in Part IV.
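For reference, the weight update with a momentum term can be written as below. The notation is mine rather than the original author's: $\eta$ is the learning rate, $\alpha$ the momentum factor, $D_i$ the gradient accumulated over the current epoch and $D_i^{\text{prev}}$ the gradient from the previous epoch; in the listing in Part III both coefficients are set to 0.2.

$$
w_i \leftarrow w_i + \eta\, D_i + \alpha\, D_i^{\text{prev}},
\qquad
D_i = \sum_{p} \delta_p\, y_{i,p},
$$

where $\delta_p$ is the back-propagated error signal for training sample $p$ and $y_{i,p}$ is the output of unit $i$ in the preceding layer.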

II: The training function

The function trained in the program is a nonlinear function with three inputs and one output:

$$
y = 2\left(x_1 + \sin x_2 + e^{x_3}\right), \qquad (x_1, x_2, x_3) \in \mathbb{R}^3,
$$

where the program draws each input uniformly from (-4, 4). The network structure is 3-5-1: three inputs, five hidden neurons and one output.
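For reference, the forward pass that the code in Part III implements for this 3-5-1 structure is, in my notation ($v_{ij}$ are the input-to-hidden weights, $w_i$ the hidden-to-output weights, and the activation is the logistic sigmoid used throughout the listing):

$$
y^{(1)}_i = \sigma\!\left(\sum_{j=1}^{3} v_{ij}\, x_j\right)\ (i = 1,\dots,5),
\qquad
\hat{y} = \sigma\!\left(\sum_{i=1}^{5} w_i\, y^{(1)}_i\right),
\qquad
\sigma(s) = \frac{1}{1 + e^{-s}}.
$$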

III: Program and user interface (VB)

1. Main form

Code:

Private Sub Command1_Click()
    form2.Visible = False
    Form3.Visible = True
End Sub

Private Sub Command2_Click()
    form2.Visible = False
    Form1.Visible = True
End Sub

Private Sub Command3_Click()
    form2.Visible = False
    Form4.Visible = True
End Sub

Private Sub Command4_Click()
    form2.Visible = False
    Form5.Visible = True
End Sub

Private Sub Command5_Click()
    End
End Sub

Private Sub Form_Load()
    ' The result-viewing and generalization buttons stay disabled until training has run.
    Command3.Enabled = False
    Command4.Enabled = False
End Sub

2. View the network structure

Code:

Private Sub Command1_Click()
    Form3.Visible = False
    form2.Visible = True
End Sub

3. Network training
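Before the listing, it may help to state the error signals that the training routine computes for the sigmoid units. The notation is mine and matches the arrays deltW and deltV in the code; $d_p$ is the normalized desired output Ex(p) and $\hat{y}_p$ the network output y2(p) for sample $p$:

$$
\delta^{o}_{p} = \big(d_p - \hat{y}_p\big)\,\hat{y}_p\,\big(1 - \hat{y}_p\big),
\qquad
\delta^{h}_{i,p} = \delta^{o}_{p}\, w_i\, y^{(1)}_{i,p}\,\big(1 - y^{(1)}_{i,p}\big),
$$

and the gradients accumulated over one epoch are $D^{w}_{i} = \sum_{p} \delta^{o}_{p}\, y^{(1)}_{i,p}$ and $D^{v}_{ij} = \sum_{p} \delta^{h}_{i,p}\, x_{j,p}$ (with $x_{j,p}$ the normalized input), which are then applied with the momentum update given in Part I.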

Code:

Private Sub Command1_Click()
    Form1.Visible = False
    form2.Visible = True
End Sub

Private Sub Command2_Click()
    Label2.Caption = "Training..."
    Dim i As Integer, j As Integer, k As Integer, p As Integer, s As Single
    Dim Maxx(1 To 3) As Single, Minx(1 To 3) As Single, Meanx(1 To 3) As Single
    Dim x(1 To 3, 1 To 100) As Single, sumx(1 To 3) As Single, Temp As Single
    Dim Datex(1 To 3, 1 To 100) As Single, inputx(1 To 3) As Single, outputx(1 To 100) As Single
    Dim Ex(1 To 100) As Single
    Dim time(1 To 5000) As Integer, cishu(1 To 100) As Integer
    Dim Dv_1(1 To 5, 1 To 3) As Single, Dw_1(1 To 5) As Single
    Dim R As Single
    Dim Maxd As Single, Mind As Single
    Dim s1(1 To 5) As Single, y1(1 To 5, 1 To 100) As Single, s2 As Single, y2(1 To 100) As Single
    Dim deltW(1 To 100) As Single, deltV(1 To 5, 1 To 100) As Single
    Dim Dw(1 To 5) As Single, Dv(1 To 5, 1 To 3) As Single
    Dim MyIn(1 To 3) As Single
    Dim Errorx(1 To 5000) As Single

    Randomize
    For i = 1 To 3
        Maxx(i) = 0
        Minx(i) = 0
        Meanx(i) = 0
    Next i
    Temp = 0
    Maxd = 0
    Mind = 0
    ' Random initial weights in (-1, 1); the previous weight changes start at zero.
    For i = 1 To 5
        For j = 1 To 3
            Dv_1(i, j) = 0
            v(i, j) = 2 * Rnd - 1
        Next j
        Dw_1(i) = 0
        w(i) = 2 * Rnd - 1
    Next i
    ' 100 random training inputs, uniform on (-4, 4).
    For j = 1 To 3
        For i = 1 To 100
            x(j, i) = 4 * (2 * Rnd - 1)
        Next i
        sumx(j) = 0
    Next j
    ' Find the maximum, minimum and mean of each input component.
    For j = 1 To 3
        For i = 1 To 100
            If x(j, i) >= Maxx(j) Then
                Maxx(j) = x(j, i)
            End If
            If x(j, i) <= Minx(j) Then
                Minx(j) = x(j, i)
            End If
            sumx(j) = sumx(j) + x(j, i)
        Next i
        Meanx(j) = sumx(j) / 100
    Next j
    ' Normalize the inputs: centre on the mean, divide by the larger distance to the extremes.
    For j = 1 To 3
        For i = 1 To 100
            If Maxx(j) - x(j, i) >= x(j, i) - Minx(j) Then
                R = Maxx(j) - x(j, i)
            Else
                R = x(j, i) - Minx(j)
            End If
            Datex(j, i) = (x(j, i) - Meanx(j)) / R
        Next i
    Next j

    ' Desired outputs of the target function for every training sample.
    For i = 1 To 100
        For j = 1 To 3
            inputx(j) = Datex(j, i)
        Next j
        outputx(i) = 2 * (inputx(1) + Sin(inputx(2)) + Exp(inputx(3)))
    Next i
    ' Normalize the outputs to [0, 1].
    For i = 1 To 100
        If Maxd <= outputx(i) Then
            Maxd = outputx(i)
        End If
        If Mind >= outputx(i) Then
            Mind = outputx(i)
        End If
    Next i
    For i = 1 To 100
        Ex(i) = (outputx(i) - Mind) / (Maxd - Mind)
    Next i
    ' Training: 5000 epochs over the 100 samples.
    For s = 1 To 5000 Step 1
        time(s) = s
        For p = 1 To 100
            cishu(p) = p
            For i = 1 To 3
                MyIn(i) = Datex(i, p)
            Next i
            ' Hidden layer: weighted sums of the three inputs, then logistic sigmoid.
            For i = 1 To 5
                For j = 1 To 3
                    Temp = Temp + v(i, j) * MyIn(j)
                Next j
                s1(i) = Temp
                Temp = 0
            Next i
            For i = 1 To 5
                y1(i, p) = 1 / (1 + Exp(-s1(i)))
            Next i
            ' Output layer: weighted sum of all five hidden outputs, then sigmoid.
            For i = 1 To 5
                Temp = y1(i, p) * w(i) + Temp
            Next i
            s2 = Temp
            Temp = 0
            y2(p) = 1 / (1 + Exp(-s2))
            ' Error signals for the output unit and the hidden units.
            deltW(p) = (Ex(p) - y2(p)) * y2(p) * (1 - y2(p))
            For i = 1 To 5
                deltV(i, p) = deltW(p) * w(i) * y1(i, p) * (1 - y1(i, p))
            Next i
        Next p
        ' Sum of squared errors for this epoch.
        For i = 1 To 100
            Temp = Temp + (Ex(i) - y2(i)) ^ 2
        Next i
        Errorx(s) = Temp
        Temp = 0
        ' Adjust the weights: accumulate the gradients, keeping the previous ones for the momentum term.
        For i = 1 To 5
            Dw_1(i) = Dw(i)
        Next i
        For i = 1 To 5
            For j = 1 To 100
                Temp = Temp + deltW(j) * y1(i, j)
            Next j
            Dw(i) = Temp
            Temp = 0
        Next i
        For i = 1 To 5
            For j = 1 To 3
                Dv_1(i, j) = Dv(i, j)
            Next j
        Next i
        For i = 1 To 5
            For j = 1 To 3
                For k = 1 To 100
                    Temp = Temp + deltV(i, k) * Datex(j, k)
                Next k
                Dv(i, j) = Temp
                Temp = 0
            Next j
        Next i
        ' Weight update: learning rate 0.2 on the new gradient plus momentum 0.2 on the previous one.
        For i = 1 To 5
            w(i) = 0.2 * Dw(i) + 0.2 * Dw_1(i) + w(i)
        Next i

        For i = 1 To 3
            For j = 1 To 5
                v(j, i) = 0.2 * Dv(j, i) + 0.2 * Dv_1(j, i) + v(j, i)
            Next j
        Next i
        ' Plot the desired outputs and the current network outputs for this epoch.
        Picture1.Cls
        Picture1.ScaleTop = 1.5
        Picture1.ScaleHeight = -2
        Picture1.ScaleLeft = -10
        Picture1.ScaleWidth = 120
        Picture1.Line (-9, 0)-(110, 0)
        Picture1.Line (0, 0)-(0, 1.5)
        For i = 1 To 100
            Picture1.PSet (cishu(i), Ex(i)), RGB(128, 128, 0)
            Picture1.PSet (cishu(i), y2(i)), RGB(128, 0, 0)
        Next i
        For i = 1 To 99
            Picture1.Line (cishu(i), Ex(i))-(cishu(i + 1), Ex(i + 1)), RGB(128, 128, 0)
            Picture1.Line (cishu(i), y2(i))-(cishu(i + 1), y2(i + 1)), RGB(128, 0, 0)
        Next i
        ' Short busy-wait so the plot stays visible, then show the epoch number.
        For j = 1 To 1000
            For k = 1 To 50
            Next k
        Next j
        Picture2.Cls
        Picture2.Print s
        DoEvents
    Next s

    ' Generalization: test the trained network on 20 new random samples.
    Label2.Caption = ""
    form2.Command3.Enabled = True
    form2.Command4.Enabled = True
    Dim test(1 To 3, 1 To 20) As Single, sumE(1 To 3) As Single
    Dim MaxE(1 To 3) As Single, MinE(1 To 3) As Single, MeanE(1 To 3) As Single
    Dim MaxxE As Single, MinxE As Single
    Dim des(1 To 3) As Single, outE(1 To 20) As Single

    Dim MIn(1 To 3) As Single, s11(1 To 5) As Single, y11(1 To 5, 1 To 20) As Single, s22 As Single
    Dim DateE(1 To 3, 1 To 20) As Single
    For i = 1 To 20
        For j = 1 To 3
            test(j, i) = 4 * (2 * Rnd - 1)
        Next j
    Next i
    ' Normalize the test inputs the same way as the training inputs.
    For j = 1 To 3
        For i = 1 To 20
            If test(j, i) >= MaxE(j) Then
                MaxE(j) = test(j, i)
            End If
            If test(j, i) <= MinE(j) Then
                MinE(j) = test(j, i)
            End If
            sumE(j) = sumE(j) + test(j, i)
        Next i
        MeanE(j) = sumE(j) / 20
    Next j
    For j = 1 To 3
        For i = 1 To 20
            If MaxE(j) - test(j, i) >= test(j, i) - MinE(j) Then
                R = MaxE(j) - test(j, i)
            Else
                R = test(j, i) - MinE(j)
            End If
            DateE(j, i) = (test(j, i) - MeanE(j)) / R
        Next i
    Next j
    ' Network outputs for the test samples.
    For p = 1 To 20
        Ti(p) = p
        For i = 1 To 3
            MIn(i) = DateE(i, p)
        Next i
        For i = 1 To 5
            For j = 1 To 3
                Temp = Temp + v(i, j) * MIn(j)
            Next j
            s11(i) = Temp
            Temp = 0
        Next i
        For i = 1 To 5
            y11(i, p) = 1 / (1 + Exp(-s11(i)))
        Next i
        For i = 1 To 5
            Temp = y11(i, p) * w(i) + Temp
        Next i
        s22 = Temp
        Temp = 0
        y22(p) = 1 / (1 + Exp(-s22))
    Next p
    ' Desired outputs for the test samples and their normalization.
    For j = 1 To 20
        For i = 1 To 3
            des(i) = DateE(i, j)
        Next i
        outE(j) = 2 * (des(1) + Sin(des(2)) + Exp(des(3)))
    Next j
    For i = 1 To 20
        If MaxxE <= outE(i) Then
            MaxxE = outE(i)
        End If
        If MinxE >= outE(i) Then
            MinxE = outE(i)
        End If
    Next i
    For i = 1 To 20
        outD(i) = (outE(i) - MinxE) / (MaxxE - MinxE)
    Next i
End Sub
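The generalization code above recomputes the forward pass inline. For reuse elsewhere, the same computation can be wrapped in a helper function. The sketch below is not part of the original program: the name PredictNet is mine, and it assumes the global arrays w and v from the module in subsection 6 and an input vector that has already been normalized in the same way as the training data.

' Sketch only: evaluate the trained 3-5-1 network on one normalized input vector.
Public Function PredictNet(xin() As Single) As Single
    Dim i As Integer, j As Integer
    Dim s As Single, hidden(1 To 5) As Single
    ' Hidden layer: weighted sum of the three inputs, then logistic sigmoid.
    For i = 1 To 5
        s = 0
        For j = 1 To 3
            s = s + v(i, j) * xin(j)
        Next j
        hidden(i) = 1 / (1 + Exp(-s))
    Next i
    ' Output layer: weighted sum of the five hidden activations, then sigmoid.
    s = 0
    For i = 1 To 5
        s = s + w(i) * hidden(i)
    Next i
    PredictNet = 1 / (1 + Exp(-s))
End Function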

4. View the training results

Code:

Private Sub Command1_Click()
    Form5.Visible = False
    form2.Visible = True
End Sub

Private Sub Command2_Click()
    Picture1.Cls
    Picture2.Cls
    Dim i As Integer, j As Integer
    ' Print the trained input-to-hidden weights v and hidden-to-output weights w.
    For i = 1 To 5
        For j = 1 To 3
            Picture2.Print v(i, j);
        Next j
        Picture2.Print
        Picture2.Print
        Picture1.Print w(i);
    Next i
End Sub

5. Generalization

Code:

Private Sub Command1_Click()
    Form4.Visible = False
    form2.Visible = True
End Sub

Private Sub Command2_Click()
    Dim i As Integer, s As Integer
    ' Plot the desired test outputs against the network outputs on the 20 test samples.
    For s = 1 To 20
        Picture1.Cls
        Picture1.ScaleTop = 1.5
        Picture1.ScaleHeight = -2
        Picture1.ScaleLeft = -5
        Picture1.ScaleWidth = 30
        Picture1.Line (-5, 0)-(25, 0)
        Picture1.Line (0, -0.5)-(0, 1.5)
        For i = 1 To 20
            Picture1.PSet (Ti(i), outD(i)), RGB(128, 128, 0)
            Picture1.PSet (Ti(i), y22(i)), RGB(128, 0, 0)
        Next i
        For i = 1 To 19
            Picture1.Line (Ti(i), outD(i))-(Ti(i + 1), outD(i + 1)), RGB(128, 128, 0)
            Picture1.Line (Ti(i), y22(i))-(Ti(i + 1), y22(i + 1)), RGB(128, 0, 0)
        Next i
    Next s
End Sub

6. Global module

' Weights and generalization results shared between the forms.
Public w(1 To 5) As Single, v(1 To 5, 1 To 3) As Single
Public Ti(1 To 20) As Single, y22(1 To 20) As Single, outD(1 To 20) As Single

IV: Analysis and discussion

The program above approximates a nonlinear function with three inputs and one output. Training uses the improved BP rule with a momentum factor; the inputs are 100 randomly generated samples and the desired outputs are obtained from the known function. After 5000 training epochs the network output fits the desired output well, generalization is also satisfactory, and both the training error and the generalization error stay within acceptable limits.

As noted earlier, the choice of learning rate and momentum factor is important in BP training; the two are examined separately below.

1. On the learning rate

For the function above, the learning rate was set to 0.01, 0.2, 0.3, 0.5 and 0.9 in turn. Comparing the resulting training runs and curves shows that a small learning rate gives stable but slow training. Because BP relies on gradient descent, the quality of training depends heavily on the first steps; if the learning rate is too small, the early training is so slow that the whole run may never reach the required accuracy. A larger learning rate trains faster but is less stable, and beyond a certain value training may fail to converge at all. For this reason a momentum term is introduced to improve training performance.
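As a reminder of why the step size matters, the underlying gradient-descent step on each weight can be written as follows (notation mine; up to a constant factor, E is the epoch error accumulated in Errorx(s) in the listing):

$$
w \leftarrow w - \eta\,\frac{\partial E}{\partial w},
\qquad
E = \sum_{p=1}^{100}\big(d_p - \hat{y}_p\big)^2,
$$

so a small learning rate $\eta$ gives proportionally small steps, and a large one gives large, potentially unstable steps.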

2. On the momentum factor

Next, the momentum factor was set to 0.01, 0.1, 0.2, 0.5 and 0.8 in turn, with the learning rate fixed at 0.5. The training runs and the corresponding curves show that once the momentum term is introduced, training remains fairly smooth even at a larger learning rate, and because each weight change also takes the previous change into account, the corrections become more reasonable. If the momentum factor is too large, however, training oscillates and the optimum cannot be found.

In summary, choosing the learning rate and the momentum factor together makes BP training both fast and stable, so that a good combination of weights is found in a relatively short time.
