Algorithm procedure of gradient descent

Input: the training data set $T = \{(x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n)\}$, where

$$x_i \in R^n, \quad y_i \in \mathcal{Y} = \{-1, +1\}, \quad i = 1, 2, \cdots, n$$

and the learning rate $\eta$.

Output: $a$, $b$; the perceptron model $f(x) = \mathrm{sign}\left(\sum_{j=1}^{n} a_j y_j x_j \cdot x + b\right)$

Procedure:

1. Choose initial values $a_0 = 0$, $b_0 = 0$.
2. Select a data point $(x_i, y_i)$ from the training set.
3. If $$y_i\left(\sum_{j=1}^{n} a_j y_j x_j \cdot x_i + b\right) \le 0,$$ then

$$\begin{cases} a_i \leftarrow a_i + \eta \\ b \leftarrow b + \eta y_i \end{cases}$$
4. Go to step 2; repeat until the training set contains no misclassified points.
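
Note that each update adds $\eta$ to a single $a_i$, so $a_i = n_i \eta$, where $n_i$ is the number of times point $i$ has been misclassified. After training, the primal parameters can therefore be recovered from the dual ones:

$$w = \sum_{j=1}^{n} a_j y_j x_j, \qquad f(x) = \mathrm{sign}(w \cdot x + b)$$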

In the dual form, the training data appear only through inner products. For efficiency, the inner products of all pairs of feature vectors in the training set can be precomputed and stored as a matrix, the so-called Gram matrix:

$$G = [x_i \cdot x_j]_{n \times n}$$
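
For example, with the three points $x_1 = (3, 3)^T$, $x_2 = (4, 3)^T$, $x_3 = (1, 1)^T$ (a toy set used purely for illustration), the Gram matrix is

$$G = \begin{bmatrix} 18 & 21 & 6 \\ 21 & 25 & 7 \\ 6 & 7 & 2 \end{bmatrix}$$

e.g. $G_{12} = x_1 \cdot x_2 = 3 \times 4 + 3 \times 3 = 21$.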

Code: https://github.com/windmissing/LiHang-TongJiXueXiFangFa/blob/master/Chapter2/perceptron-2.ipynb

```python
import numpy as np

def calcGramMatrix(X):
    """Precompute the Gram matrix G[i, j] = x_i . x_j."""
    m = X.shape[0]
    gram = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):   # the Gram matrix is symmetric
            gram[i, j] = X[i].dot(X[j])
            gram[j, i] = gram[i, j]
    return gram

def calcI(X, y, a, b, i, gram):
    """Functional margin of point i: y_i * (sum_j a_j y_j (x_j . x_i) + b)."""
    total = 0
    for j in range(X.shape[0]):
        total += a[j] * y[j] * gram[j, i]
    return (total + b) * y[i]

# Perceptron, dual form
def perceptron(X, y, eta):
    a, b = np.zeros(X.shape[0]), 0
    gram = calcGramMatrix(X)
    isFinished = False
    while not isFinished:
        isFinished = True
        for i in range(X.shape[0]):
            if calcI(X, y, a, b, i, gram) <= 0:   # point i is misclassified
                isFinished = False
                a[i] += eta
                b += eta * y[i]
    def f(x):
        """Raw decision score; the predicted label is sign(f(x))."""
        total = 0
        for j in range(X.shape[0]):
            total += a[j] * y[j] * X[j].dot(x)
        return total + b
    return a, b, f
```
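
A quick usage sketch on the toy data from the Gram-matrix example above (illustrative only, not taken from the linked notebook). With this implementation's fixed visiting order the loop converges to $a = (2, 0, 5)$, $b = -3$, i.e. $w = 2 x_1 - 5 x_3 = (1, 1)^T$ and the separating hyperplane $x^{(1)} + x^{(2)} - 3 = 0$:

```python
import numpy as np

# Toy linearly separable data: two positive points, one negative point.
X = np.array([[3, 3], [4, 3], [1, 1]])
y = np.array([1, 1, -1])

a, b, f = perceptron(X, y, eta=1)
print(a, b)                                     # [2. 0. 5.] -3
print(np.sign(f([3, 3])), np.sign(f([1, 1])))   # 1.0 -1.0
```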
