The next steps should look familiar by now: with the extra constraint from the constant C in place, we again use the SMO algorithm to solve the quadratic program. But I want to cover kernel functions in the same pass. When the samples are not linearly separable, introducing a kernel maps them into a high-dimensional space where they do become linearly separable, as with the samples shown in (Figure 6):
(Figure 6)
In (Figure 6), the samples are clearly not linearly separable. But suppose we compute some functions of the existing samples X, as shown on the right of (Figure 6), and let the resulting f serve as new samples (or, put differently, new features). Wouldn't that work better?
This effectively projects X into a higher-dimensional space, but we still need a way to compute f. This is where the kernel function comes in. Taking the Gaussian kernel as an example, we pick a few sample points as landmarks and use the kernel to compute f against them, as shown in (Figure 7):
(Figure 7)
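The landmark construction above can be sketched in a few lines. This is an illustrative sketch, not the post's own code: `rbf_features` is a hypothetical helper, and it uses the same exp(-||x - l||^2 / sigma^2) convention that the post's `kernelTrans` function uses later.

```python
import numpy as np

def rbf_features(X, landmarks, sigma):
    """Hypothetical helper: build new features f from Gaussian-kernel
    similarities between each sample x and each landmark l_k."""
    f = np.zeros((X.shape[0], landmarks.shape[0]))
    for k, l in enumerate(landmarks):
        d2 = np.sum((X - l) ** 2, axis=1)   # squared distance to landmark k
        f[:, k] = np.exp(-d2 / sigma ** 2)  # weight decays with distance
    return f

X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
f = rbf_features(X, X, sigma=1.3)           # training points as landmarks
```

Each sample gets one feature per landmark: identical points score 1.0, and the score decays toward 0 as distance grows, which is exactly the "weight decay" the text describes.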
This gives us f. The kernel here acts as a similarity measure between a sample X and the landmarks, applying a weight decay with distance, which yields new features f that depend on x. Plug f into the SVM described above, solve for the alphas as before, and read off the weights. The principle is that simple. To give it a bit more academic flavor, let's also write the kernel into the objective function, as shown in (Formula 11):
(Formula 11)
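(Formula 11) appears as an image in the original post; the standard kernelized dual objective it refers to can be reconstructed, using the same symbols as the surrounding text, as:

```latex
\max_{\alpha}\; \sum_{n=1}^{N}\alpha_n
  \;-\;\frac{1}{2}\sum_{n=1}^{N}\sum_{m=1}^{N}
  \alpha_n\,\alpha_m\,y_n\,y_m\,K(X_n, X_m)
\qquad
\text{s.t.}\;\; 0 \le \alpha_n \le C,\;\;
\sum_{n=1}^{N}\alpha_n y_n = 0
```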
Here K(Xn, Xm) is the kernel function. Compared with the earlier objective there is not much change, and SMO optimization applies as before. The code is as follows:
def smoPK(dataMatIn, classLabels, C, toler, maxIter, kTup=('lin', 0)):
    # full Platt SMO; kTup is forwarded so optStruct can build the kernel
    oS = optStruct(mat(dataMatIn), mat(classLabels).transpose(), C, toler, kTup)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or entireSet):
        alphaPairsChanged = 0
        if entireSet:        # go over all samples
            for i in range(oS.m):
                alphaPairsChanged += innerL(i, oS)
                print("fullSet, iter: %d i: %d, pairs changed %d" % (iter, i, alphaPairsChanged))
            iter += 1
        else:                # go over non-bound (railed) alphas only
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerL(i, oS)
                print("non-bound, iter: %d i: %d, pairs changed %d" % (iter, i, alphaPairsChanged))
            iter += 1
        if entireSet:
            entireSet = False        # toggle entire-set loop
        elif alphaPairsChanged == 0:
            entireSet = True
        print("iteration number: %d" % iter)
    return oS.b, oS.alphas
Below is a small demo: handwritten digit recognition.
(1) Collect data: a text file is provided.
(2) Prepare data: construct vectors from the binary images.
(3) Analyze data: visually inspect the image vectors.
(4) Train the algorithm: run SMO with two different kernel functions, and with different settings for the radial basis function.
(5) Test the algorithm: write a function that tests the different kernels and computes the error rate.
(6) Use the algorithm: a complete image-recognition application also requires some image-processing knowledge, which this demo omits.
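Step (2) can be sketched with a small helper. Both the 32x32 binary text format and the `img2vector` name are assumptions for illustration, not taken from the listing below:

```python
def img2vector(lines):
    # hypothetical helper for step (2): flatten a 32x32 binary text image
    # (a list of strings of '0'/'1' characters) into a length-1024 feature list
    vect = []
    for line in lines[:32]:
        for ch in line.strip()[:32]:
            vect.append(int(ch))
    return vect

sample = ["01" * 16] * 32        # a fake 32x32 image for illustration
vec = img2vector(sample)
```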
The complete code is as follows:
from numpy import *
from time import sleep

def loadDataSet(fileName):
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = line.strip().split('\t')
        dataMat.append([float(lineArr[0]), float(lineArr[1])])
        labelMat.append(float(lineArr[2]))
    return dataMat, labelMat

def selectJrand(i, m):
    j = i                        # we want to select any j not equal to i
    while (j == i):
        j = int(random.uniform(0, m))
    return j

def clipAlpha(aj, H, L):
    if aj > H:
        aj = H
    if L > aj:
        aj = L
    return aj

def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
    dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
    b = 0; m, n = shape(dataMatrix)
    alphas = mat(zeros((m, 1)))
    iter = 0
    while (iter < maxIter):
        alphaPairsChanged = 0
        for i in range(m):
            fXi = float(multiply(alphas, labelMat).T * (dataMatrix * dataMatrix[i, :].T)) + b
            Ei = fXi - float(labelMat[i])  # check whether example i violates the KKT conditions
            if ((labelMat[i] * Ei < -toler) and (alphas[i] < C)) or \
               ((labelMat[i] * Ei > toler) and (alphas[i] > 0)):
                j = selectJrand(i, m)
                fXj = float(multiply(alphas, labelMat).T * (dataMatrix * dataMatrix[j, :].T)) + b
                Ej = fXj - float(labelMat[j])
                alphaIold = alphas[i].copy(); alphaJold = alphas[j].copy()
                if (labelMat[i] != labelMat[j]):
                    L = max(0, alphas[j] - alphas[i])
                    H = min(C, C + alphas[j] - alphas[i])
                else:
                    L = max(0, alphas[j] + alphas[i] - C)
                    H = min(C, alphas[j] + alphas[i])
                if L == H: print("L==H"); continue
                eta = 2.0 * dataMatrix[i, :] * dataMatrix[j, :].T - \
                      dataMatrix[i, :] * dataMatrix[i, :].T - \
                      dataMatrix[j, :] * dataMatrix[j, :].T
                if eta >= 0: print("eta>=0"); continue
                alphas[j] -= labelMat[j] * (Ei - Ej) / eta
                alphas[j] = clipAlpha(alphas[j], H, L)
                if (abs(alphas[j] - alphaJold) < 0.00001):
                    print("j not moving enough"); continue
                # update alpha_i by the same amount as alpha_j, in the opposite direction
                alphas[i] += labelMat[j] * labelMat[i] * (alphaJold - alphas[j])
                b1 = b - Ei - labelMat[i] * (alphas[i] - alphaIold) * dataMatrix[i, :] * dataMatrix[i, :].T - \
                     labelMat[j] * (alphas[j] - alphaJold) * dataMatrix[i, :] * dataMatrix[j, :].T
                b2 = b - Ej - labelMat[i] * (alphas[i] - alphaIold) * dataMatrix[i, :] * dataMatrix[j, :].T - \
                     labelMat[j] * (alphas[j] - alphaJold) * dataMatrix[j, :] * dataMatrix[j, :].T
                if (0 < alphas[i]) and (C > alphas[i]): b = b1
                elif (0 < alphas[j]) and (C > alphas[j]): b = b2
                else: b = (b1 + b2) / 2.0
                alphaPairsChanged += 1
                print("iter: %d i: %d, pairs changed %d" % (iter, i, alphaPairsChanged))
        if (alphaPairsChanged == 0): iter += 1
        else: iter = 0
        print("iteration number: %d" % iter)
    return b, alphas

def kernelTrans(X, A, kTup):
    # calc the kernel or transform data to a higher-dimensional space
    m, n = shape(X)
    K = mat(zeros((m, 1)))
    if kTup[0] == 'lin':
        K = X * A.T                      # linear kernel
    elif kTup[0] == 'rbf':
        for j in range(m):
            deltaRow = X[j, :] - A
            K[j] = deltaRow * deltaRow.T
        K = exp(K / (-1 * kTup[1] ** 2))  # divide in NumPy is element-wise, not matrix-wise as in Matlab
    else:
        raise NameError('Houston We Have a Problem -- That Kernel is not recognized')
    return K

class optStruct:
    def __init__(self, dataMatIn, classLabels, C, toler, kTup):
        # initialize the structure with the parameters
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m, 1)))
        self.b = 0