

Python Machine Learning Theory and Practice (6): Support Vector Machines

2021-01-06 00:31  marvin521  Python

This article is part six of the Python machine learning theory and practice series, covering support vector machines. It has some reference value; interested readers may find it useful.

The previous installment essentially completed the theoretical derivation of the SVM: finding the maximum-margin separator was reduced to solving for the Lagrange multipliers alpha, from which the weight vector W follows, and with the weights, the maximum margin. But that derivation rested on an assumption: that the training set is linearly separable, in which case the resulting alphas lie in [0, ∞). What if the data are not linearly separable? Then we must allow some samples to cross the classifier. The objective function can stay the same as long as we introduce slack variables ξ_n ≥ 0, which measure the cost of a misclassified sample point: ξ_n = 0 when a point is classified correctly, and ξ_n = |t_n − y(x_n)| when it is not, where t_n is the sample's true label, −1 or +1. Recall that in the previous installment we fixed the distance from a support vector to the classifier at 1, so the distance between the two classes' support vectors is certainly greater than 1, and for a misclassified point ξ_n is certainly greater than 1, as shown in (Figure 5) (formula and figure numbering here continues from the previous installment).
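As a small aside (not from the original article), the slack value for each sample can be computed directly: ξ_n = max(0, 1 − t_n·y(x_n)) is zero for points correctly classified with margin and exceeds 1 for misclassified points, matching the description above. A minimal NumPy sketch with toy numbers:

```python
import numpy as np

# Toy decision values y(x_n) and true labels t_n in {-1, +1} (made-up numbers).
y_vals = np.array([1.5, 0.3, -0.8, -2.0])  # classifier outputs w.x + b
t = np.array([1, 1, 1, -1])                # true labels

# Slack: 0 when t_n*y(x_n) >= 1 (outside the margin);
# between 0 and 1 inside the margin; above 1 when misclassified.
xi = np.maximum(0, 1 - t * y_vals)
print(xi)  # third sample is misclassified, so xi[2] > 1
```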

[image not reproduced in this copy]

(Figure 5)

With this misclassification cost in hand, we add the term to the objective of (Formula 4) from the previous installment, giving the form in (Formula 8):

min_{w, b, ξ}   (1/2)‖w‖² + C Σ_n ξ_n

(Formula 8)
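To make the objective concrete, here is a toy evaluation (numbers invented for illustration) of (Formula 8), which trades margin width (through ‖w‖²) against the total slack:

```python
import numpy as np

def soft_margin_objective(w, xi, C):
    # (1/2)*||w||^2 + C * sum of slacks -- the quantity minimized in Formula 8
    return 0.5 * np.dot(w, w) + C * np.sum(xi)

w = np.array([3.0, 4.0])         # ||w||^2 = 25
xi = np.array([0.0, 0.7, 1.8])   # toy slack values
print(soft_margin_objective(w, xi, C=1.0))  # 12.5 + 2.5 = 15.0
```

A larger C makes the slack term dominate, pushing the optimizer toward fewer training errors at the expense of a narrower margin.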

Repeating the Lagrange-multiplier steps of the previous installment yields (Formula 9):

L(w, b, ξ, a, μ) = (1/2)‖w‖² + C Σ_n ξ_n − Σ_n a_n [ t_n y(x_n) − 1 + ξ_n ] − Σ_n μ_n ξ_n

(Formula 9)

There is now an extra multiplier μ_n, and the job, of course, is to keep solving this objective. Repeating the steps of the previous installment and taking derivatives gives (Formula 10):

∂L/∂w = 0  ⇒  w = Σ_n a_n t_n x_n
∂L/∂b = 0  ⇒  Σ_n a_n t_n = 0
∂L/∂ξ_n = 0  ⇒  a_n = C − μ_n

(Formula 10)

Since alpha ≥ 0 and μ_n ≥ 0, the relation a_n = C − μ_n gives 0 ≤ alpha ≤ C. For clarity, here are the KKT conditions of (Formula 9) as well (the third class of optimization problem from the previous installment); note that μ_n ≥ 0:

a_n ≥ 0,    t_n y(x_n) − 1 + ξ_n ≥ 0,    a_n [ t_n y(x_n) − 1 + ξ_n ] = 0,
μ_n ≥ 0,    ξ_n ≥ 0,    μ_n ξ_n = 0
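A consequence of these KKT conditions worth spelling out: combining a_n = C − μ_n with μ_n ξ_n = 0 splits the alphas into three cases. A small sketch (the helper name and tolerance are my own, not from the article):

```python
def kkt_category(alpha, C, eps=1e-8):
    # Interpret a single multiplier under the soft-margin KKT conditions.
    if alpha < eps:
        return "t_n*y(x_n) >= 1: outside the margin, not a support vector"
    if alpha > C - eps:
        return "t_n*y(x_n) <= 1: inside the margin or misclassified (xi_n > 0)"
    return "t_n*y(x_n) == 1: exactly on the margin (xi_n = 0)"

C = 200.0
for a in [0.0, 37.5, 200.0]:
    print("alpha = %6.1f -> %s" % (a, kkt_category(a, C)))
```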

Up to this point the form of the optimization is essentially unchanged; it has merely gained a misclassification-cost term and one extra constraint, 0 ≤ alpha ≤ C. C is a constant whose role is to control margin maximization while permitting some misclassification: too large and the model overfits, too small and it underfits. The next step should be familiar: with the extra constraint involving the constant C, continue solving the quadratic program with the SMO algorithm. But let us also cover kernel functions in one go. If the samples are not linearly separable, introducing a kernel maps them into a high-dimensional space where they become linearly separable, as with the linearly inseparable samples shown in (Figure 6):

[image not reproduced in this copy]

(Figure 6)

The samples in (Figure 6) are clearly not linearly separable. But suppose we perform some operations among the existing samples X, as shown on the right of (Figure 6), and let f serve as new samples (or new features); would that not be better? X has now effectively been projected into a higher dimension, but we do not yet know f. This is where the kernel function comes in. Taking the Gaussian kernel as an example, pick a few sample points as landmarks in (Figure 7) and use the kernel to compute f, as shown in (Figure 7):
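The landmark construction can be sketched in a few lines (toy data; the width sigma and the choice of landmarks are assumptions for illustration). Each new feature f_i is the Gaussian similarity between a sample x and landmark l_i:

```python
import numpy as np

def rbf_features(x, landmarks, sigma):
    # f_i = exp(-||x - l_i||^2 / (2*sigma^2)): similarity of x to each landmark
    d2 = np.sum((landmarks - x) ** 2, axis=1)  # squared distance to every landmark
    return np.exp(-d2 / (2.0 * sigma ** 2))

landmarks = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])  # chosen base points
x = np.array([1.0, 1.0])
f = rbf_features(x, landmarks, sigma=1.0)
print(f)  # near 1 at the coincident landmark, decaying toward 0 with distance
```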

[image not reproduced in this copy]

(Figure 7)

This gives us f. The kernel here acts as a similarity measure between the samples X and the landmark points, applying a distance-based weight decay to form new features f that depend on x. Feed f into the SVM described above, solve for the alphas, and read off the weights; the principle really is that simple. To give it a slightly more academic flavor, let us also write the kernel into the objective function, as shown in (Formula 11):

L̃(a) = Σ_n a_n − (1/2) Σ_n Σ_m a_n a_m t_n t_m K(x_n, x_m),
subject to  0 ≤ a_n ≤ C  and  Σ_n a_n t_n = 0

(Formula 11)

Here K(Xn, Xm) is the kernel function. Compared with the objective above there is little change, and it is again solved with SMO optimization. The code is as follows:
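Before the listing, a small sketch of the point of (Formula 11): the dual objective touches the data only through the Gram matrix K(x_n, x_m), never the high-dimensional features themselves, so swapping kernels leaves the optimizer unchanged (the toy values below are invented):

```python
import numpy as np

def dual_objective(alphas, t, K):
    # L(a) = sum_n a_n - (1/2) * sum_{n,m} a_n a_m t_n t_m K(x_n, x_m)
    at = alphas * t
    return np.sum(alphas) - 0.5 * at @ K @ at

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([1.0, -1.0, 1.0])
K = X @ X.T                       # linear kernel; an RBF Gram matrix drops in unchanged
alphas = np.array([0.1, 0.2, 0.1])
print(dual_objective(alphas, t, K))  # 0.375
```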

def smoP(dataMatIn, classLabels, C, toler, maxIter, kTup=('lin', 0)): #full Platt SMO
 oS = optStruct(mat(dataMatIn),mat(classLabels).transpose(),C,toler,kTup)
 iter = 0
 entireSet = True; alphaPairsChanged = 0
 while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
  alphaPairsChanged = 0
  if entireSet: #go over all
   for i in range(oS.m):
    alphaPairsChanged += innerL(i,oS)
    print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
   iter += 1
  else: #go over non-bound (railed) alphas
   nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
   for i in nonBoundIs:
    alphaPairsChanged += innerL(i,oS)
    print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
   iter += 1
  if entireSet: entireSet = False #toggle entire set loop
  elif (alphaPairsChanged == 0): entireSet = True
  print("iteration number: %d" % iter)
 return oS.b,oS.alphas

Here is a small demo: handwritten digit recognition.

(1) Collect data: text files are provided.

(2) Prepare data: construct vectors from the binary images.

(3) Analyze data: visually inspect the image vectors.

(4) Train the algorithm: run SMO with two different kernels, and with several settings of the radial basis function.

(5) Test the algorithm: write a function to test different kernels and compute the error rate.

(6) Use the algorithm: a complete image-recognition application would also need some image-processing knowledge; this demo omits it.
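Step (2) can be sketched as flattening a 32×32 grid of '0'/'1' characters into a 1×1024 row vector, the same idea as the img2vector routine in the full listing (the toy diagonal image is my own example):

```python
import numpy as np

def binary_image_to_vector(rows):
    # Flatten 32 strings of 32 '0'/'1' characters into a 1x1024 float row.
    assert len(rows) == 32 and all(len(r) == 32 for r in rows)
    return np.array([[int(ch) for r in rows for ch in r]], dtype=float)

# Toy 32x32 "image": a single diagonal stroke of ones.
rows = ["".join("1" if i == j else "0" for j in range(32)) for i in range(32)]
vec = binary_image_to_vector(rows)
print(vec.shape, vec.sum())  # (1, 1024) 32.0
```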

The complete code follows:

from numpy import *
from time import sleep
 
def loadDataSet(fileName):
 dataMat = []; labelMat = []
 fr = open(fileName)
 for line in fr.readlines():
  lineArr = line.strip().split('\t')
  dataMat.append([float(lineArr[0]), float(lineArr[1])])
  labelMat.append(float(lineArr[2]))
 return dataMat,labelMat
 
def selectJrand(i,m):
 j=i #we want to select any J not equal to i
 while (j==i):
  j = int(random.uniform(0,m))
 return j
 
def clipAlpha(aj,H,L):
 if aj > H:
  aj = H
 if L > aj:
  aj = L
 return aj
 
def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
 dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
 b = 0; m,n = shape(dataMatrix)
 alphas = mat(zeros((m,1)))
 iter = 0
 while (iter < maxIter):
  alphaPairsChanged = 0
  for i in range(m):
   fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
   Ei = fXi - float(labelMat[i])#if checks if an example violates KKT conditions
   if ((labelMat[i]*Ei < -toler) and (alphas[i] < C)) or ((labelMat[i]*Ei > toler) and (alphas[i] > 0)):
    j = selectJrand(i,m)
    fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
    Ej = fXj - float(labelMat[j])
    alphaIold = alphas[i].copy(); alphaJold = alphas[j].copy();
    if (labelMat[i] != labelMat[j]):
     L = max(0, alphas[j] - alphas[i])
     H = min(C, C + alphas[j] - alphas[i])
    else:
     L = max(0, alphas[j] + alphas[i] - C)
     H = min(C, alphas[j] + alphas[i])
    if L==H: print("L==H"); continue
    eta = 2.0 * dataMatrix[i,:]*dataMatrix[j,:].T - dataMatrix[i,:]*dataMatrix[i,:].T - dataMatrix[j,:]*dataMatrix[j,:].T
    if eta >= 0: print("eta>=0"); continue
    alphas[j] -= labelMat[j]*(Ei - Ej)/eta
    alphas[j] = clipAlpha(alphas[j],H,L)
    if (abs(alphas[j] - alphaJold) < 0.00001): print("j not moving enough"); continue
    alphas[i] += labelMat[j]*labelMat[i]*(alphaJold - alphas[j])#update i by the same amount as j
                  #the update is in the oppostie direction
    b1 = b - Ei- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[i,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[i,:]*dataMatrix[j,:].T
    b2 = b - Ej- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[j,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[j,:]*dataMatrix[j,:].T
    if (0 < alphas[i]) and (C > alphas[i]): b = b1
    elif (0 < alphas[j]) and (C > alphas[j]): b = b2
    else: b = (b1 + b2)/2.0
    alphaPairsChanged += 1
    print("iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
  if (alphaPairsChanged == 0): iter += 1
  else: iter = 0
  print("iteration number: %d" % iter)
 return b,alphas
 
def kernelTrans(X, A, kTup): #calc the kernel or transform data to a higher dimensional space
 m,n = shape(X)
 K = mat(zeros((m,1)))
 if kTup[0]=='lin': K = X * A.T #linear kernel
 elif kTup[0]=='rbf':
  for j in range(m):
   deltaRow = X[j,:] - A
   K[j] = deltaRow*deltaRow.T
  K = exp(K/(-1*kTup[1]**2)) #divide in NumPy is element-wise not matrix like Matlab
 else: raise NameError('Houston We Have a Problem -- \
 That Kernel is not recognized')
 return K
 
class optStruct:
 def __init__(self,dataMatIn, classLabels, C, toler, kTup): # Initialize the structure with the parameters
  self.X = dataMatIn
  self.labelMat = classLabels
  self.C = C
  self.tol = toler
  self.m = shape(dataMatIn)[0]
  self.alphas = mat(zeros((self.m,1)))
  self.b = 0
  self.eCache = mat(zeros((self.m,2))) #first column is valid flag
  self.K = mat(zeros((self.m,self.m)))
  for i in range(self.m):
   self.K[:,i] = kernelTrans(self.X, self.X[i,:], kTup)
   
def calcEk(oS, k):
 fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
 Ek = fXk - float(oS.labelMat[k])
 return Ek
   
def selectJ(i, oS, Ei):   #this is the second choice -heurstic, and calcs Ej
 maxK = -1; maxDeltaE = 0; Ej = 0
 oS.eCache[i] = [1,Ei] #set valid #choose the alpha that gives the maximum delta E
 validEcacheList = nonzero(oS.eCache[:,0].A)[0]
 if (len(validEcacheList)) > 1:
  for k in validEcacheList: #loop through valid Ecache values and find the one that maximizes delta E
   if k == i: continue #don't calc for i, waste of time
   Ek = calcEk(oS, k)
   deltaE = abs(Ei - Ek)
   if (deltaE > maxDeltaE):
    maxK = k; maxDeltaE = deltaE; Ej = Ek
  return maxK, Ej
 else: #in this case (first time around) we don't have any valid eCache values
  j = selectJrand(i, oS.m)
  Ej = calcEk(oS, j)
 return j, Ej
 
def updateEk(oS, k):#after any alpha has changed update the new value in the cache
 Ek = calcEk(oS, k)
 oS.eCache[k] = [1,Ek]
   
def innerL(i, oS):
 Ei = calcEk(oS, i)
 if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
  j,Ej = selectJ(i, oS, Ei) #this has been changed from selectJrand
  alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
  if (oS.labelMat[i] != oS.labelMat[j]):
   L = max(0, oS.alphas[j] - oS.alphas[i])
   H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
  else:
   L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
   H = min(oS.C, oS.alphas[j] + oS.alphas[i])
  if L==H: print("L==H"); return 0
  eta = 2.0 * oS.K[i,j] - oS.K[i,i] - oS.K[j,j] #changed for kernel
  if eta >= 0: print("eta>=0"); return 0
  oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
  oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
  updateEk(oS, j) #added this for the Ecache
  if (abs(oS.alphas[j] - alphaJold) < 0.00001): print("j not moving enough"); return 0
  oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])#update i by the same amount as j
  updateEk(oS, i) #added this for the Ecache     #the update is in the oppostie direction
  b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i,j]
  b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,j]- oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j,j]
  if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
  elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
  else: oS.b = (b1 + b2)/2.0
  return 1
 else: return 0
 
def smoP(dataMatIn, classLabels, C, toler, maxIter,kTup=('lin', 0)): #full Platt SMO
 oS = optStruct(mat(dataMatIn),mat(classLabels).transpose(),C,toler, kTup)
 iter = 0
 entireSet = True; alphaPairsChanged = 0
 while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
  alphaPairsChanged = 0
  if entireSet: #go over all
   for i in range(oS.m):  
    alphaPairsChanged += innerL(i,oS)
    print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
   iter += 1
  else:#go over non-bound (railed) alphas
   nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
   for i in nonBoundIs:
    alphaPairsChanged += innerL(i,oS)
    print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
   iter += 1
  if entireSet: entireSet = False #toggle entire set loop
  elif (alphaPairsChanged == 0): entireSet = True
  print("iteration number: %d" % iter)
 return oS.b,oS.alphas
 
def calcWs(alphas,dataArr,classLabels):
 X = mat(dataArr); labelMat = mat(classLabels).transpose()
 m,n = shape(X)
 w = zeros((n,1))
 for i in range(m):
  w += multiply(alphas[i]*labelMat[i],X[i,:].T)
 return w
 
def testRbf(k1=1.3):
 dataArr,labelArr = loadDataSet('testSetRBF.txt')
 b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1)) #C=200 important
 datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
 svInd=nonzero(alphas.A>0)[0]
 sVs=datMat[svInd] #get matrix of only support vectors
 labelSV = labelMat[svInd];
 print("there are %d Support Vectors" % shape(sVs)[0])
 m,n = shape(datMat)
 errorCount = 0
 for i in range(m):
  kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
  predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
  if sign(predict)!=sign(labelArr[i]): errorCount += 1
 print("the training error rate is: %f" % (float(errorCount)/m))
 dataArr,labelArr = loadDataSet('testSetRBF2.txt')
 errorCount = 0
 datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
 m,n = shape(datMat)
 for i in range(m):
  kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
  predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
  if sign(predict)!=sign(labelArr[i]): errorCount += 1
 print("the test error rate is: %f" % (float(errorCount)/m))
  
def img2vector(filename):
 returnVect = zeros((1,1024))
 fr = open(filename)
 for i in range(32):
  lineStr = fr.readline()
  for j in range(32):
   returnVect[0,32*i+j] = int(lineStr[j])
 return returnVect
 
def loadImages(dirName):
 from os import listdir
 hwLabels = []
 trainingFileList = listdir(dirName)   #load the training set
 m = len(trainingFileList)
 trainingMat = zeros((m,1024))
 for i in range(m):
  fileNameStr = trainingFileList[i]
  fileStr = fileNameStr.split('.')[0]  #take off .txt
  classNumStr = int(fileStr.split('_')[0])
  if classNumStr == 9: hwLabels.append(-1)
  else: hwLabels.append(1)
  trainingMat[i,:] = img2vector('%s/%s' % (dirName, fileNameStr))
 return trainingMat, hwLabels 
 
def testDigits(kTup=('rbf', 10)):
 dataArr,labelArr = loadImages('trainingDigits')
 b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, kTup)
 datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
 svInd=nonzero(alphas.A>0)[0]
 sVs=datMat[svInd]
 labelSV = labelMat[svInd];
 print("there are %d Support Vectors" % shape(sVs)[0])
 m,n = shape(datMat)
 errorCount = 0
 for i in range(m):
  kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
  predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
  if sign(predict)!=sign(labelArr[i]): errorCount += 1
 print("the training error rate is: %f" % (float(errorCount)/m))
 dataArr,labelArr = loadImages('testDigits')
 errorCount = 0
 datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
 m,n = shape(datMat)
 for i in range(m):
  kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
  predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
  if sign(predict)!=sign(labelArr[i]): errorCount += 1
 print("the test error rate is: %f" % (float(errorCount)/m))
 
 
#######********************************
# Non-kernel versions below
#######********************************
 
class optStructK:
 def __init__(self,dataMatIn, classLabels, C, toler): # Initialize the structure with the parameters
  self.X = dataMatIn
  self.labelMat = classLabels
  self.C = C
  self.tol = toler
  self.m = shape(dataMatIn)[0]
  self.alphas = mat(zeros((self.m,1)))
  self.b = 0
  self.eCache = mat(zeros((self.m,2))) #first column is valid flag
   
def calcEkK(oS, k):
 fXk = float(multiply(oS.alphas,oS.labelMat).T*(oS.X*oS.X[k,:].T)) + oS.b
 Ek = fXk - float(oS.labelMat[k])
 return Ek
   
def selectJK(i, oS, Ei):   #this is the second choice -heurstic, and calcs Ej
 maxK = -1; maxDeltaE = 0; Ej = 0
 oS.eCache[i] = [1,Ei] #set valid #choose the alpha that gives the maximum delta E
 validEcacheList = nonzero(oS.eCache[:,0].A)[0]
 if (len(validEcacheList)) > 1:
  for k in validEcacheList: #loop through valid Ecache values and find the one that maximizes delta E
   if k == i: continue #don't calc for i, waste of time
    Ek = calcEkK(oS, k)
   deltaE = abs(Ei - Ek)
   if (deltaE > maxDeltaE):
    maxK = k; maxDeltaE = deltaE; Ej = Ek
  return maxK, Ej
 else: #in this case (first time around) we don't have any valid eCache values
  j = selectJrand(i, oS.m)
  Ej = calcEkK(oS, j)
 return j, Ej
 
def updateEkK(oS, k):#after any alpha has changed update the new value in the cache
 Ek = calcEkK(oS, k)
 oS.eCache[k] = [1,Ek]
   
def innerLK(i, oS):
 Ei = calcEkK(oS, i)
 if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
  j,Ej = selectJK(i, oS, Ei) #this has been changed from selectJrand
  alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
  if (oS.labelMat[i] != oS.labelMat[j]):
   L = max(0, oS.alphas[j] - oS.alphas[i])
   H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
  else:
   L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
   H = min(oS.C, oS.alphas[j] + oS.alphas[i])
  if L==H: print("L==H"); return 0
  eta = 2.0 * oS.X[i,:]*oS.X[j,:].T - oS.X[i,:]*oS.X[i,:].T - oS.X[j,:]*oS.X[j,:].T
  if eta >= 0: print("eta>=0"); return 0
  oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
  oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
  updateEkK(oS, j) #added this for the Ecache
  if (abs(oS.alphas[j] - alphaJold) < 0.00001): print("j not moving enough"); return 0
  oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j]) #update i by the same amount as j, in the opposite direction
  updateEkK(oS, i) #added this for the Ecache
  b1 = oS.b - Ei - oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[i,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[i,:]*oS.X[j,:].T
  b2 = oS.b - Ej - oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[j,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[j,:]*oS.X[j,:].T
  if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
  elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
  else: oS.b = (b1 + b2)/2.0
  return 1
 else: return 0
 
def smoPK(dataMatIn, classLabels, C, toler, maxIter): #full Platt SMO, non-kernel version
 oS = optStructK(mat(dataMatIn),mat(classLabels).transpose(),C,toler)
 iter = 0
 entireSet = True; alphaPairsChanged = 0
 while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
  alphaPairsChanged = 0
  if entireSet: #go over all
   for i in range(oS.m):
    alphaPairsChanged += innerLK(i,oS)
    print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
   iter += 1
  else: #go over non-bound (railed) alphas
   nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
   for i in nonBoundIs:
    alphaPairsChanged += innerLK(i,oS)
    print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
   iter += 1
  if entireSet: entireSet = False #toggle entire set loop
  elif (alphaPairsChanged == 0): entireSet = True
  print("iteration number: %d" % iter)
 return oS.b,oS.alphas

The results of a run are shown in (Figure 8):

[image not reproduced in this copy]

(Figure 8)

Readers who are interested can study the code above; for practical use, libsvm is recommended.

References:

    [1] Machine Learning in Action. Peter Harrington.

    [2] Pattern Recognition and Machine Learning. Christopher M. Bishop.

    [3] Machine Learning. Andrew Ng.

That is all for this article; hopefully it is of some help to your study.

Original link: http://blog.csdn.net/marvin521/article/details/9305497
