{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 第一次小作业: 机器学习中的优化\n",
"本次作业主要用来练习逻辑回归相关的优化问题。通过完成作业,你讲会学到: 1. 逻辑回归的梯度下降法推导 2. 如何判断逻辑回归目标函数为凸函数。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"假设我们有训练数据$D=\\{(\\mathbf{x}_1,y_1),...,(\\mathbf{x}_n,y_n)\\}$, 其中$(\\mathbf{x}_i,y_i)$为每一个样本,而且$\\mathbf{x}_i$是样本的特征并且$\\mathbf{x}_i\\in \\mathcal{R}^D$, $y_i$代表样本数据的标签(label), 取值为$0$或者$1$. 在逻辑回归中,模型的参数为$(\\mathbf{w},b)$。对于向量,我们一般用粗体来表达。 为了后续推导的方便,可以把b融入到参数w中。 这是参数$w$就变成 $w=(w_0, w_1, .., w_D)$,也就是前面多出了一个项$w_0$, 可以看作是b,这时候每一个$x_i$也需要稍作改变可以写成 $x_i = [1, x_i]$,前面加了一个1。稍做思考应该能看出为什么可以这么写。\n",
"\n",
"请回答以下问题。请用Markdown自带的Latex来编写。\n",
"\n",
"\n",
"\n",
"\n"
]
},
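{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal NumPy sketch of the augmentation described above, using randomly generated toy data (the sizes `n, D` are chosen here purely for illustration): prepending a constant $1$ to each $x_i$ absorbs $b$ into $w$ as $w_0$.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"n, D = 5, 3                              # toy sizes, for illustration only\n",
"X = np.random.randn(n, D)                # raw features, shape (n, D)\n",
"X_aug = np.hstack([np.ones((n, 1)), X])  # prepend a 1 -> shape (n, D+1)\n",
"# Now for w = (w_0, w_1, ..., w_D), X_aug @ w == b + X @ w[1:] with b = w[0].\n",
"X_aug.shape"
]
},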
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (a) ```编写逻辑回归的目标函数```\n",
"请写出目标函数(objective function), 也就是我们需要\"最小化\"的目标(也称之为损失函数或者loss function),不需要考虑正则。 把目标函数表示成最小化的形态,另外把$\\prod_{}^{}$转换成$\\log \\sum_{}^{}$\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$L(w)=$ $argmin_{w,b}-\\sum_{i=1}^n y_i\\log \\sigma(w^{T}x_i+b)+(1-y_i)\\log[1-\\sigma(w^{T}x_i+b)]$"
]
},
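{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the objective above, assuming $\\sigma$ is the sigmoid and that $b$ has been absorbed into $w$ via the augmented `X_aug` from the sketch earlier; the name `logistic_loss` is introduced here for illustration.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"def logistic_loss(w, X, y):\n",
"    # L(w) from (a): negative log-likelihood; X is assumed to carry the leading 1 column.\n",
"    p = sigmoid(X @ w)\n",
"    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))"
]
},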
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (b) ```求解对w的一阶导数```\n",
"为了做梯度下降法,我们需要对参数$w$求导,请把$L(w)$对$w$的梯度计算一下:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$\\frac{\\partial L(w)}{\\partial w}=$ $\\sum_{i=1}^n[\\sigma(w^{T}x_i+b)-y_i]x_i$"
]
},
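{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the gradient from (b), with a finite-difference spot-check on the toy data from the earlier cells; `logistic_grad` is a name introduced here, and the tolerance is illustrative.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def logistic_grad(w, X, y):\n",
"    # Gradient from (b): sum_i (sigma(w^T x_i) - y_i) x_i, written as one matrix product.\n",
"    return X.T @ (sigmoid(X @ w) - y)\n",
"\n",
"# Central-difference check of one coordinate of the gradient on toy data.\n",
"y = np.random.randint(0, 2, size=n)\n",
"w = np.random.randn(D + 1)\n",
"eps = 1e-6\n",
"e0 = np.zeros(D + 1); e0[0] = eps\n",
"numeric = (logistic_loss(w + e0, X_aug, y) - logistic_loss(w - e0, X_aug, y)) / (2 * eps)\n",
"print(np.allclose(numeric, logistic_grad(w, X_aug, y)[0], atol=1e-4))"
]
},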
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (c) ```求解对w的二阶导数```\n",
"在上面结果的基础上对$w$求解二阶导数,也就是再求一次导数。 这个过程需要回忆一下线性代数的部分 ^^。 参考: matrix cookbook: https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf, 还有 Hessian Matrix。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$\\frac{\\partial^2 L(w)}{\\partial^2 w}=$ $\\sum_{i=1}^n \\frac{x_{i,k}\\cdot x_{i,j}\\cdot e^{-w^{T}x_i}}{(1+e^{-w^{T}x_i})^2} = \\sum_{i=1}^n x_{i,k}\\cdot x_{i,j}\\cdot \\sigma_i(1-\\sigma_i)$"
]
},
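{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the Hessian from (c) in matrix form, $H = X^{T} A X$ with $A = \\mathrm{diag}(\\sigma_i(1-\\sigma_i))$, matching the entrywise formula above; `logistic_hessian` is a name introduced here for illustration.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def logistic_hessian(w, X):\n",
"    # Hessian from (c): H = X^T A X with A = diag(sigma_i * (1 - sigma_i)).\n",
"    s = sigmoid(X @ w)\n",
"    A = np.diag(s * (1 - s))\n",
"    return X.T @ A @ X\n",
"\n",
"H = logistic_hessian(w, X_aug)\n",
"H.shape  # (D+1, D+1)"
]
},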
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (d) ```证明逻辑回归目标函数是凸函数```\n",
"试着证明逻辑回归函数是凸函数。假设一个函数是凸函数,我们则可以得出局部最优解即为全局最优解,所以假设我们通过随机梯度下降法等手段找到最优解时我们就可以确认这个解就是全局最优解。证明凸函数的方法有很多种,在这里我们介绍一种方法,就是基于二次求导大于等于0。比如给定一个函数$f(x)=x^2-3x+3$,做两次\n",
"求导之后即可以得出$f''(x)=2 > 0$,所以这个函数就是凸函数。类似的,这种理论也应用于多元变量中的函数上。在多元函数上,只要证明二阶导数是posititive semidefinite即可以。 问题(c)的结果是一个矩阵。 为了证明这个矩阵(假设为H)为Positive Semidefinite,需要证明对于任意一个非零向量$v\\in \\mathcal{R}$, 需要得出$v^{T}Hv >=0$\n",
"请写出详细的推导过程:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"// TODO 请写下推导过程\n",
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since $H_{i,j} = \\sum_{i=1}^n x_{i,k}\\cdot x_{i,j}\\cdot \\sigma_i(1-\\sigma_i)$ \n",
"Let X= $\\left[\\begin{array}{cccc}\n",
"x_{1,0} & x_{1,1} & \\dots & x_{1, D} \\\\\n",
"x_{2,0} & x_{2,1} & \\dots & x_{2, D} \\\\\n",
"\\dots & \\\\\n",
"x_{n, D} & x_{n, 1} & \\dots & x_{n, D}\n",
"\\end{array}\\right]$, $A = \\left[\\begin{array}{cccc}\n",
"\\sigma_{1}\\left(1-\\sigma_{1}\\right) & 0 & \\dots & 0 \\\\\n",
"0 & \\sigma_{2}\\left(1-\\sigma_{2}\\right) & \\dots & 0 \\\\\n",
"\\dots & & \\\\\n",
"0 & 0 & \\dots & \\sigma_{n}\\left(1-\\sigma_{n}\\right)\n",
"\\end{array}\\right]$, \n",
"then we have $H = X^{T}\\cdot A \\cdot X$\n",
"It follows that $v^{T}Hv = v^{T}X^{T}AXv = (Xv)^{T}A(Xv)$. \n",
"Let $P = Xv$, then we have $H = P^{T}AP$. \n",
"Since $\\sigma_{i}=\\frac{1}{1+e^{-\\mathbf{w}^{\\mathrm{T}} \\cdot \\mathbf{x}_{\\mathbf{i}}}}$, $A$ is obviously positive. Hence, matrix H is positive semidefinite. "
]
},
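{
"cell_type": "markdown",
"metadata": {},
"source": [
"A numerical spot-check of the argument above on the toy data from the earlier cells: all eigenvalues of $H$ should be nonnegative (up to floating-point error), and $v^{T}Hv \\ge 0$ for a random $v$. The tolerance is illustrative.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# eigvalsh is appropriate here because H is symmetric.\n",
"print(np.linalg.eigvalsh(H).min() >= -1e-10)\n",
"v = np.random.randn(D + 1)\n",
"print(v @ H @ v >= -1e-10)"
]
},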
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 2
}