' BP network demo (VB6): generate and normalize the training samples
' Initialize the hidden-to-output weights in [-1, 1]
For i = 1 To 5
    w(i) = 2 * Rnd - 1
Next i
' Generate 100 random 3-dimensional samples in [-4, 4]
For j = 1 To 3
    For i = 1 To 100
        x(j, i) = 4 * (2 * Rnd - 1)
    Next i
    sumx(j) = 0
Next j
' Find the extrema of each input dimension
For j = 1 To 3
    For i = 1 To 100
        If x(j, i) > Maxx(j) Then Maxx(j) = x(j, i)
        If x(j, i) < Minx(j) Then Minx(j) = x(j, i)
    Next i
Next j
' Normalize: deviation from the mean, scaled by the distance
' to the farther of the two extremes
For j = 1 To 3
    For i = 1 To 100
        If Maxx(j) - x(j, i) > x(j, i) - Minx(j) Then
            R = Maxx(j) - x(j, i)
        Else
            R = x(j, i) - Minx(j)
        End If
        Datex(j, i) = (x(j, i) - Meanx(j)) / R
    Next i
Next j
' Expected output
For i = 1 To 100
    For j = 1 To 3
        inputx(j) = Datex(j, i)
    Next j
    outputx(i) = 2 * (inputx(1) + Sin(inputx(2)) + Exp(inputx(3)))
Next i
' Normalize the expected output to [0, 1]
For i = 1 To 100
    If Maxd <> Mind Then Ex(i) = (outputx(i) - Mind) / (Maxd - Mind)
Next i
' Training
For s = 1 To 5000 Step 1
    time(s) = s
    For p = 1 To 100
        cishu(p) = p
        For i = 1 To 3
            MyIn(i) = Datex(i, p)
        Next i
        ' Hidden layer: weighted sum, then sigmoid
        For i = 1 To 5
            Temp = 0
            For j = 1 To 3
                Temp = Temp + v(i, j) * MyIn(j)
            Next j
            s1(i) = Temp
            y1(i, p) = 1 / (1 + Exp(-s1(i)))
        Next i
        ' Output layer
        Temp = 0
        For i = 1 To 5
            Temp = Temp + y1(i, p) * w(i)
        Next i
        s2 = Temp
        y2(p) = 1 / (1 + Exp(-s2))
        ' Back-propagated deltas
        deltW(p) = (Ex(p) - y2(p)) * y2(p) * (1 - y2(p))
        For i = 1 To 5
            deltV(i, p) = deltW(p) * w(i) * y1(i, p) * (1 - y1(i, p))
        Next i
    Next p
    ' Sum-of-squares error
    Temp = 0
    For i = 1 To 100
        Temp = Temp + (Ex(i) - y2(i)) ^ 2
    Next i
    Errorx(s) = Temp
    ' Adjust the weights (current plus previous batch gradient)
    For i = 1 To 5
        Dw_1(i) = Dw(i)
        Temp = 0
        For j = 1 To 100
            Temp = Temp + deltW(j) * y1(i, j)
        Next j
        Dw(i) = Temp
        For j = 1 To 3
            Dv_1(i, j) = Dv(i, j)
            Temp = 0
            For k = 1 To 100
                Temp = Temp + deltV(i, k) * Datex(j, k)
            Next k
            Dv(i, j) = Temp
        Next j
        w(i) = 0.2 * Dw(i) + 0.2 * Dw_1(i) + w(i)
    Next i
    For i = 1 To 3
        For j = 1 To 5
            v(j, i) = 0.2 * Dv(j, i) + 0.2 * Dv_1(j, i) + v(j, i)
        Next j
    Next i
    ' Plot the expected (olive) and actual (dark red) outputs
    Picture1.Cls
    Picture1.ScaleTop = 1.5
    Picture1.ScaleHeight = -2
    Picture1.ScaleLeft = -10
    Picture1.ScaleWidth = 120
    Picture1.Line (-9, 0)-(110, 0)
    Picture1.Line (0, 0)-(0, 1.5)
    For i = 1 To 100
        Picture1.PSet (cishu(i), Ex(i)), RGB(128, 128, 0)
        Picture1.PSet (cishu(i), y2(i)), RGB(128, 0, 0)
    Next i
    For i = 1 To 99
        Picture1.Line (cishu(i), Ex(i))-(cishu(i + 1), Ex(i + 1)), RGB(128, 128, 0)
        Picture1.Line (cishu(i), y2(i))-(cishu(i + 1), y2(i + 1)), RGB(128, 0, 0)
    Next i
    ' Delay
    For j = 1 To 1000
        For k = 1 To 50
        Next k
    Next j
    Picture2.Cls
    Picture2.Print s
    DoEvents
Next s
Form2.Command3.Enabled = True
Form2.Command4.Enabled = True

' Generalization test on 20 fresh samples
Dim test(1 To 3, 1 To 20) As Single, sumE(1 To 3) As Single
Dim MaxE(1 To 3) As Single, MinE(1 To 3) As Single, MeanE(1 To 3) As Single
Dim MaxxE As Single, MinxE As Single
Dim des(1 To 3) As Single, outE(1 To 20) As Single
Dim MIn(1 To 3) As Single, s11(1 To 5) As Single, y11(1 To 5, 1 To 20) As Single, s22 As Single
Dim DateE(1 To 3, 1 To 20) As Single
For j = 1 To 3
    For i = 1 To 20
        test(j, i) = 4 * (2 * Rnd - 1)
    Next i
Next j
' Normalize the test inputs the same way as the training inputs
For j = 1 To 3
    For i = 1 To 20
        If test(j, i) > MaxE(j) Then MaxE(j) = test(j, i)
        If test(j, i) < MinE(j) Then MinE(j) = test(j, i)
    Next i
    For i = 1 To 20
        If MaxE(j) - test(j, i) > test(j, i) - MinE(j) Then
            R = MaxE(j) - test(j, i)
        Else
            R = test(j, i) - MinE(j)
        End If
        DateE(j, i) = (test(j, i) - MeanE(j)) / R
    Next i
Next j
' Network output on the test set
For p = 1 To 20
    Ti(p) = p
    For i = 1 To 3
        MIn(i) = DateE(i, p)
    Next i
    For i = 1 To 5
        Temp = 0
        For j = 1 To 3
            Temp = Temp + v(i, j) * MIn(j)
        Next j
        s11(i) = Temp
        y11(i, p) = 1 / (1 + Exp(-s11(i)))
    Next i
    Temp = 0
    For i = 1 To 5
        Temp = Temp + y11(i, p) * w(i)
    Next i
    s22 = Temp
    y22(p) = 1 / (1 + Exp(-s22))
Next p
' Target output and its normalization
For j = 1 To 20
    For i = 1 To 3
        des(i) = DateE(i, j)
    Next i
    outE(j) = 2 * (des(1) + Sin(des(2)) + Exp(des(3)))
Next j
For i = 1 To 20
    If MaxxE <> MinxE Then outD(i) = (outE(i) - MinxE) / (MaxxE - MinxE)
Next i
End Sub

' ==== BP network class module ====
Private mW1() As Double        ' Hidden-layer weights, S1 x R
Private mW2() As Double        ' Output-layer weights, S2 x S1
Private mB1() As Double        ' Hidden-layer biases, S1 x 1
Private mB2() As Double        ' Output-layer biases, S2 x 1
Private mErr() As Double       ' Mean squared error
Private mMinMax() As Double    ' Lower and upper bounds of the input vectors, R x 2
Private mS1 As Long            ' Number of hidden-layer neurons, S1
Private mS2 As Long            ' Number of output-layer neurons, S2
Private mR As Long             ' Number of input-layer neurons, R
Private mGoal As Double        ' Convergence goal
Private mLr As Double          ' Learning rate
Private mGama As Double        ' Momentum coefficient
Private mMaxEpochs As Long     ' Maximum number of iterations
Private mIteration As Long     ' Actual number of iterations

' * Intermediate variables *
Private HiddenOutput() As Double  ' Hidden-layer outputs
Private OutOutput() As Double     ' Output-layer outputs
Private HiddenErr() As Double     ' Error of each hidden-layer neuron
Private OutPutErr() As Double     ' Error of each output-layer neuron
Private Pdealing() As Double      ' Input vector currently being processed
Private Tdealing() As Double      ' Target output for the current input
Private OldW1() As Double         ' Previous weight arrays (for momentum)
Private OldW2() As Double
Private OldB1() As Double         ' Previous bias arrays (for momentum)
Private OldB2() As Double
Private Ts As Long                ' Total number of input vectors
Private Initialized As Boolean    ' Whether the network has been initialized

' * Properties *
Public Event Update(iteration)

Public Property Get W1() As Double()
    W1 = mW1
End Property

Public Property Get W2() As Double()
    W2 = mW2
End Property

Public Property Get B1() As Double()
    B1 = mB1
End Property

Public Property Get B2() As Double()
    B2 = mB2
End Property

Public Property Get Err() As Double()
    Err = mErr
End Property

Public Property Get S1() As Long
    S1 = mS1
End Property

Public Property Let S1(Value As Long)
    mS1 = Value
End Property

Public Property Get S2() As Long
    S2 = mS2
End Property

Public Property Get R() As Long
    R = mR
End Property

Public Sub MinMax(Value() As Double)
    mMinMax = Value
End Sub

Public Property Get Goal() As Double
    Goal = mGoal
End Property

Public Property Let Goal(Value As Double)
    mGoal = Value
End Property

Public Property Get Lr() As Double
    Lr = mLr
End Property

Public Property Let Lr(Value As Double)
    mLr = Value
End Property

Public Property Get Gama() As Double
    Gama = mGama
End Property

Public Property Let Gama(Value As Double)
    mGama = Value
End Property

Public Property Get MaxEpochs() As Long
    MaxEpochs = mMaxEpochs
End Property

Public Property Let MaxEpochs(Value As Long)
    mMaxEpochs = Value
End Property

Public Property Get iteration() As Long
    iteration = mIteration
End Property

' * Initialization *
Private Sub Class_Initialize()
    mS1 = 5
    mGoal = 0.0001
    mLr = 0.1
    mGama = 0.8
    mMaxEpochs = 1000
End Sub

' * Training *
' Procedure:  Train
' Parameters: P - input matrix, T - target matrix
' Author:     laviewpbt
' Date:       2006-11-15
Public Sub Train(P() As Double, T() As Double)
    Dim i As Long, j As Long, Index As Long
    Dim NmP() As Double
    mR = UBound(P, 1)    ' Number of elements per input vector
    mS2 = UBound(T, 1)   ' Number of output-layer neurons
    Ts = UBound(P, 2)    ' Number of input vectors
    NmP = CopyArray(P)   ' Keep the original P; normalization would destroy it
    IniParameters NmP    ' Initialize parameters and arrays
    mIteration = 0
    For i = 1 To mMaxEpochs
        mIteration = mIteration + 1
        ' Pick a random training sample; this works better than cycling in order
        Index = Int(Rnd * Ts + 1)
        For j = 1 To mR
            Pdealing(j) = NmP(j, Index)   ' Input vector being processed
        Next
        For j = 1 To mS2
            Tdealing(j) = T(j, Index)     ' Target vector being processed
        Next
        HiddenLayer    ' Compute the output of each hidden-layer neuron
        OutputLayer    ' Compute the output of each output-layer neuron
        OutError       ' Compute the error of each output-layer neuron
        HiddenError    ' Compute the error of each hidden-layer neuron
        Update_W2B2    ' Update hidden-to-output weights and output biases
        Update_W1B1    ' Update input-to-hidden weights and hidden biases
        If mIteration Mod 1000 = 0 Then RaiseEvent Update(mIteration)
        If mErr(mIteration) < mGoal Then Exit Sub   ' Goal reached; stop learning
    Next
End Sub

' * Initialize the data *
Private Sub IniParameters(P() As Double)
    Dim i As Long, j As Long
    ReDim mW1(mS1, mR) As Double, mW2(mS2, mS1) As Double
    ReDim mB1(mS1) As Double, mB2(mS2) As Double
    ReDim OldW1(mS1, mR) As Double, OldW2(mS2, mS1) As Double
    ReDim OldB1(mS1) As Double, OldB2(mS2) As Double
    ReDim HiddenOutput(mS1) As Double, OutOutput(mS2) As Double
    ReDim HiddenErr(mS1) As Double, OutPutErr(mS2) As Double
    ReDim Pdealing(mR) As Double, Tdealing(mS2) As Double
    ReDim mErr(mMaxEpochs) As Double
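The scheme the class implements — a sigmoid hidden layer, a sigmoid output layer, output and hidden deltas, and weight updates with a momentum term — can be sketched in Python. The sketch mirrors the class defaults (5 hidden neurons, learning rate 0.1, momentum 0.8, 1000 epochs) but is otherwise an assumption-laden reconstruction: the function name, the scalar-output restriction, and the toy AND-style test data are all invented for illustration, not part of the original class.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_bp(samples, targets, s1=5, lr=0.1, gamma=0.8, epochs=1000):
    """One-hidden-layer BP network with momentum (scalar sigmoid output).

    samples: list of input vectors; targets: list of floats in (0, 1).
    Returns (v, w, b1, b2, errors): hidden weights, output weights,
    biases, and the per-epoch sum-of-squares error.
    """
    random.seed(1)                      # deterministic for the example
    r = len(samples[0])
    v = [[random.uniform(-1, 1) for _ in range(r)] for _ in range(s1)]
    w = [random.uniform(-1, 1) for _ in range(s1)]
    b1 = [0.0] * s1
    b2 = 0.0
    old_dv = [[0.0] * r for _ in range(s1)]   # previous updates, for momentum
    old_dw = [0.0] * s1
    errors = []
    for _ in range(epochs):
        err = 0.0
        for p, t in zip(samples, targets):
            # Forward pass: hidden layer, then output layer
            y1 = [sigmoid(sum(v[i][j] * p[j] for j in range(r)) + b1[i])
                  for i in range(s1)]
            y2 = sigmoid(sum(w[i] * y1[i] for i in range(s1)) + b2)
            err += (t - y2) ** 2
            # Backward pass: output delta, then hidden deltas
            d2 = (t - y2) * y2 * (1 - y2)
            d1 = [d2 * w[i] * y1[i] * (1 - y1[i]) for i in range(s1)]
            # Weight updates: gradient step plus momentum term
            for i in range(s1):
                dw = lr * d2 * y1[i] + gamma * old_dw[i]
                w[i] += dw
                old_dw[i] = dw
                for j in range(r):
                    dv = lr * d1[i] * p[j] + gamma * old_dv[i][j]
                    v[i][j] += dv
                    old_dv[i][j] = dv
            b2 += lr * d2
            for i in range(s1):
                b1[i] += lr * d1[i]
        errors.append(err)
    return v, w, b1, b2, errors
```

Unlike the class, which picks one random sample per iteration, this sketch sweeps every sample each epoch; both are standard variants of on-line back-propagation.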
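The input normalization used in the first listing (for both the training and the test sets) subtracts each dimension's mean and divides by the distance from the sample to the farther of the two extremes, a per-sample scale factor. Since the listing is garbled at exactly this point, the scale rule below is a best-guess reconstruction, shown for one dimension:

```python
def normalize_dim(values):
    """Normalize one input dimension the way the VB listing appears to:
    subtract the mean, then divide by the sample's distance to the
    farther extreme. The divisor rule is reconstructed from a garbled
    source, so treat it as an assumption."""
    mx, mn = max(values), min(values)
    mean = sum(values) / len(values)
    out = []
    for x in values:
        r = max(mx - x, x - mn)            # larger of the two distances
        out.append((x - mean) / r if r else 0.0)
    return out
```

Note that because the divisor varies per sample, this is not an affine rescaling: samples near an extreme are divided by the full range, while samples near the mean are divided by roughly half of it.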