# Derivation of the Gradient Descent Method

The perceptron's loss function:

$$
L(w, b) = -\sum_{x_i \in M} y_i (w \cdot x_i + b)
$$

The goal is to minimize this loss function. Since $$y_i (w \cdot x_i + b) \le 0$$ for every misclassified point, $$L(w, b)$$ is non-negative, and it reaches zero exactly when no points are misclassified.
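
As a concrete illustration, here is a minimal NumPy sketch of this loss. The function name `perceptron_loss` and the convention that a point counts as misclassified when $$y_i (w \cdot x_i + b) \le 0$$ are assumptions of the sketch, not notation from the text.

```python
import numpy as np

def perceptron_loss(w, b, X, y):
    """L(w, b) = -sum over x_i in M of y_i * (w . x_i + b).

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    A point belongs to M when y_i * (w . x_i + b) <= 0 (assumed convention).
    """
    margins = y * (X @ w + b)            # y_i (w . x_i + b) for every point
    return -margins[margins <= 0].sum()  # sum only over the misclassified set M
```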

Using gradient descent, take the partial derivatives of $$L(w,b)$$ and move $$w, b$$ in the direction opposite to the gradient.

$$
\begin{cases}
\nabla_w L(w,b) = -\sum_{x_i \in M} y_i x_i \\
\nabla_b L(w,b) = -\sum_{x_i \in M} y_i
\end{cases}
\tag{2}
$$

where $$M$$ is the set of misclassified points.
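
A direct NumPy rendering of formula (2) might look as follows; as before, the function name and the `<= 0` membership test for $$M$$ are assumptions of the sketch.

```python
import numpy as np

def perceptron_gradients(w, b, X, y):
    """Batch gradients of L(w, b) from formula (2), summed over M."""
    mis = y * (X @ w + b) <= 0                          # boolean mask: membership in M
    grad_w = -(y[mis][:, None] * X[mis]).sum(axis=0)    # -sum over M of y_i x_i
    grad_b = -y[mis].sum()                              # -sum over M of y_i
    return grad_w, grad_b
```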

Since the perceptron uses stochastic gradient descent, it adjusts $$w, b$$ based on a single point at a time.\
Suppose the currently selected misclassified point is $$(x_i, y_i)$$. This is equivalent to $$M$$ containing only the single point $$(x_i, y_i)$$, so the partial-derivative formula (2) simplifies to

$$
\begin{cases}
\nabla_w L(w,b) = -y_i x_i \\
\nabla_b L(w,b) = -y_i
\end{cases}
$$

Moving $$(w, b)$$ in the direction opposite to the gradient, with learning rate $$\eta$$, gives

$$
\begin{cases}
w_{new} = w_{old} + \eta y_i x_i \\
b_{new} = b_{old} + \eta y_i
\end{cases}
$$
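
Putting the pieces together, below is a minimal sketch of the resulting training loop, assuming labels in $$\{-1, +1\}$$ and a fixed learning rate; `train_perceptron` and its default parameters are illustrative choices, not names from the text.

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, max_epochs=100):
    """Stochastic-gradient perceptron training.

    Repeatedly picks a misclassified point (x_i, y_i) and applies
        w <- w + eta * y_i * x_i
        b <- b + eta * y_i
    until no point is misclassified (assumes linearly separable data).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for i in range(n):
            if y[i] * (X[i] @ w + b) <= 0:  # (x_i, y_i) is misclassified
                w += eta * y[i] * X[i]      # w_new = w_old + eta * y_i * x_i
                b += eta * y[i]             # b_new = b_old + eta * y_i
                errors += 1
        if errors == 0:                     # converged: no misclassified points left
            break
    return w, b
```

For instance, `train_perceptron(np.array([[3., 3.], [4., 3.], [1., 1.]]), np.array([1, 1, -1]))` separates this small toy set after a handful of updates, since the perceptron convergence theorem guarantees termination on linearly separable data.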

