Commit 8b81d830 by 20200519016

add homework2

parent c07c1c17
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 线性规划与Word Mover's Distance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"WMD在文本分析领域算作是一个比较经典的算法,它可以用来计算两个文本之间的相似度。 比如问答系统中,可以判断一个用户的query跟哪一个知识库里的问题最相近。而且,计算两个文本之间的相似度这个问题是NLP的核心,这也是为什么文本相似度计算这么重要的原因。 \n",
"\n",
"背景: 在文本相似度匹配问题上如果使用tf-idf等模型,那这时候假如两个文本中没有出现共同的单词,则计算出来的相似度为0,但我们知道实际上很多时候单词可能不一样,但表示的内容确是类似的。 比如 ”People like this car“, \"Those guys enjoy driving that\", 虽然没有任何一样的单词,意思确是类似的。 这是WMD算法提出来的初衷。\n",
"\n",
"WMD作为文本相似度计算的一种方法,最早由Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, Kilian Q. Weinberger等人提出。但实际上它的想法极其简单,可以认为是Transportation Problem用在了词向量上, 其核心是线性规划。 对于Transportation问题在课上已经讲过,仍不清楚的朋友可以回顾一下课程的内容。 \n",
"\n",
"在Section B里我们需要做两件事情: 1. 实现WMD算法来计算两个字符串之间的距离。 2. WMD的拓展方案"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1. WMD算法的实现\n",
"具体算法的实现是基于线性规划问题,细节请参考WMD的论文。 核心思想是把第一个句子转换成第二个句子过程中需要花费的最小cost。 \n",
"\n",
"<img src=\"picture1.png\" alt=\"drawing\" width=\"600\"/>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"线性规划问题即可以写成如下形式:\n",
"\n",
"<img src=\"picture2.png\" alt=\"drawing\" width=\"500\"/>\n",
"\n"
]
},
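{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before implementing WMD itself, here is a minimal sketch (not part of the assignment) of a tiny $2\\times 2$ transportation LP solved with cvxopt in the standard form $\\min\\ c^{\\top}x$ s.t. $Gx\\leq h$, $Ax=b$. The supplies, demands, and costs are illustrative values; `solver='glpk'` assumes cvxopt was installed with GLPK support, which the implementation below also relies on, since the row-sum and column-sum equality constraints are linearly dependent."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal transportation-LP sketch with cvxopt (illustrative values).\n",
"# Variables are T11, T12, T21, T22, flattened row-major, the same\n",
"# layout the WMD implementation below uses.\n",
"import numpy as np\n",
"from cvxopt import matrix, solvers\n",
"\n",
"cost = np.array([0.0, 1.0, 1.0, 0.0])  # c(i,j), flattened row-major\n",
"A = np.array([[1., 1., 0., 0.],   # row sums:    T11+T12 = d_1\n",
"              [0., 0., 1., 1.],   #              T21+T22 = d_2\n",
"              [1., 0., 1., 0.],   # column sums: T11+T21 = d'_1\n",
"              [0., 1., 0., 1.]])  #              T12+T22 = d'_2\n",
"b = np.array([0.5, 0.5, 0.5, 0.5])\n",
"G = matrix(-np.eye(4))             # T_ij >= 0  <=>  -T_ij <= 0\n",
"h = matrix(np.zeros(4))\n",
"sol = solvers.lp(matrix(cost), G, h, A=matrix(A), b=matrix(b), solver='glpk')\n",
"print(np.array(sol['x']).ravel())  # optimal plan puts all mass on the diagonal\n",
"print(sol['primal objective'])     # 0.0"
]
},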
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里的参数是$T_{ij}$, 需要通过LP Solver去解决。$c(i,j)$指的是两个单词之间的距离, $c_{i,j}=||x_i-x_j||_2$。 参考: $||x||_2=\\sqrt{x_1^2+...+x_d^2}$\n",
"\n",
"为了实现WMD算法,首先需要词向量。 在这里,我们就不自己去训练了,直接使用已经训练好的词向量。 \n",
"请下载训练好的Glove向量:https://nlp.stanford.edu/projects/glove/, 下载其中的 glove.6B.zip, 并使用d=100维的向量。 由于文件较大,需要一些时间来下载。 \n",
"\n",
"请注意:提交作业时不要上传此文件, 但文件路径请使用我们给定的路径,不要改变。 "
]
},
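{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check of the distance definition above, using two made-up 3-dimensional vectors (illustrative values, not real GloVe entries):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# c(i,j) = ||x_i - x_j||_2 computed from the definition and via numpy.\n",
"import numpy as np\n",
"\n",
"x_i = np.array([1.0, 2.0, 3.0])\n",
"x_j = np.array([2.0, 3.0, 4.0])\n",
"by_hand = sum((a - b) ** 2 for a, b in zip(x_i, x_j)) ** 0.5\n",
"print(by_hand)                    # sqrt(3) = 1.7320508...\n",
"print(np.linalg.norm(x_i - x_j))  # same value"
]
},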
{
"cell_type": "code",
"execution_count": 128,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import math\n",
"import numpy as np\n",
"from cvxopt import matrix, solvers\n",
"\n",
"# 读取Glove文件。 注意: 不要试图修改文件以及路径\n",
"glovefile = open(\"glove.6B.100d.txt\",\"r\",encoding=\"utf-8\") \n",
"\n",
"def get_word_vec(glovefile, limit=None):\n",
" cnt = 0\n",
" word_vec = dict()\n",
" for line in glovefile:\n",
" word = line.split()[0]\n",
" vec = line.split()[1:]\n",
" if word not in word_vec.keys():\n",
" word_vec[word] = [] \n",
" word_vec[word] = vec\n",
" \n",
" cnt = cnt + 1\n",
" if limit != None and cnt > limit:\n",
" break\n",
" return word_vec\n",
"\n",
"# get_word_vec(glovefile, 20)\n",
"\n",
"\n",
"def get_word_frequency(sent1):\n",
" word_cnt = dict()\n",
" all_cnt = 0\n",
" for word in sent1.split():\n",
" all_cnt += 1\n",
" if word not in word_cnt.keys():\n",
" word_cnt[word] = 0\n",
" word_cnt[word] += 1\n",
"\n",
" return (word_cnt, all_cnt)\n",
"\n",
"def get_word_distance(wordvec1, wordvec2):\n",
" c_index = []\n",
" c_array = np.zeros((len(wordvec1.keys()), len(wordvec2.keys())))\n",
"# print(c_array)\n",
" i = 0 \n",
" for word1,vec1 in wordvec1.items():\n",
" c_index.append(word1)\n",
" j = 0\n",
" c_col = []\n",
" for word2,vec2 in wordvec2.items():\n",
" c_col.append(word2)\n",
" v_pow = 0\n",
" for cnt in range(0, len(vec1)):\n",
" v_pow += math.pow(float(vec1[cnt])-float(vec2[cnt]),2)\n",
" c_array[i][j] = v_pow ** 0.5\n",
" j += 1\n",
" i += 1\n",
"# print(\"c_array:\",c_array.reshape(-1))\n",
" \n",
" return c_array.reshape(-1)\n",
"\n",
"# word_vec1 = {'a':[1,2,3], 'c':[2,3,4]} \n",
"# word_vec2 = {'b':[3,4,5], 'c':[2,3,4]}\n",
"\n",
"# print(get_word_distance(word_vec1, word_vec2))\n",
" \n",
"\n",
"# TODO: 编写WMD函数来计算两个句子之间的相似度\n",
"def WMD (sent1, sent2):\n",
" \"\"\"\n",
" 这是主要的函数模块。参数sent1是第一个句子, 参数sent2是第二个句子,可以认为没有经过分词。\n",
" 在英文里,用空格作为分词符号。\n",
" \n",
" 在实现WMD算法的时候,需要用到LP Solver用来解决Transportation proboem. 请使用http://cvxopt.org/examples/tutorial/lp.html\n",
" 也可以参考blog: https://scaron.info/blog/linear-programming-in-python-with-cvxopt.html\n",
" \n",
" 需要做的事情为:\n",
" \n",
" 1. 对句子做分词: 调用 .split() 函数即可\n",
" 2. 获取每个单词的词向量。这需要读取文件之后构建embedding matrix. \n",
" 3. 构建lp问题,并用solver解决\n",
" \n",
" 可以自行定义其他的函数,但务必不要改写WMD函数名。测试时保证WMD函数能够正确运行。\n",
" \"\"\"\n",
" wmd_dist = []\n",
" \n",
" # 2.句子做分词处理\n",
" word_list1 = sent1.split()\n",
" word_list2 = sent2.split()\n",
" \n",
" # 3.获取每个词的词向量\n",
" wordvec1 = dict()\n",
" wordvec2 = dict()\n",
" for w in word_list1:\n",
" if w not in wordvec1.keys():\n",
" if w in wordvec_dict.keys():\n",
" wordvec1[w] = wordvec_dict[w]\n",
" else:\n",
" print(\"word:%s not in dict\" % w)\n",
" \n",
" for w in word_list2:\n",
" if w not in wordvec2.keys():\n",
" if w in wordvec_dict.keys():\n",
" wordvec2[w] = wordvec_dict[w]\n",
" else:\n",
" print(\"word:%s not in dict\" % w)\n",
" \n",
" # 4.构建lp问题\n",
" # 4.1计算2个单词的词向量距离矩阵c\n",
" c_matrix = get_word_distance(wordvec1, wordvec2)\n",
"# print(c_matrix)\n",
" \n",
" # 4.2 计算A\n",
" len1 = len(wordvec1.keys())\n",
" len2 = len(wordvec2.keys())\n",
" a_array = np.zeros((len1+len2,len1*len2))\n",
" for index in range(len1):\n",
" for col in range(len2):\n",
" a_array[index][len2*index + col] = 1\n",
" cnt = 0\n",
" for index in range(len2):\n",
" for col in range(len1*len2):\n",
" if col%len2 == index:\n",
" a_array[len1 + index][col] = 1\n",
" a_matrix = matrix(a_array)\n",
" \n",
" # 4.3计算d\n",
" word_cnt1,all_cnt1 = get_word_frequency(sent1)\n",
" word_cnt2,all_cnt2 = get_word_frequency(sent2)\n",
" \n",
" d_ij = []\n",
" for word in word_cnt1.keys():\n",
" if word in wordvec1.keys():\n",
" d_ij.append(float(word_cnt1[word])/all_cnt1)\n",
" \n",
" for word in word_cnt2.keys():\n",
" if word in wordvec2.keys():\n",
" d_ij.append(float(word_cnt2[word])/all_cnt2)\n",
" d_matrix = matrix(d_ij)\n",
" \n",
" \n",
" # 4.4求解\n",
" A = a_matrix\n",
" b = d_matrix\n",
" c = matrix(c_matrix)\n",
"# print(A)\n",
"# print(b)\n",
"# print(c)\n",
"# 该题无不等式约束条件\n",
" num_of_T = len(c_matrix)\n",
" G = matrix(-np.eye(num_of_T))\n",
" h = matrix(np.zeros(num_of_T))\n",
" sol = solvers.lp(c, G, h, A=A, b=b, solver='glpk')\n",
" wmd_dist = sol['primal objective']\n",
" return wmd_dist\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 126,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4.271249445939769\n",
"1.2846638676464923\n",
"word:I not in dict\n",
"word:I not in dict\n",
"1.9662445521003484\n",
"1.2488114671609447\n"
]
}
],
"source": [
"## TODO: 自己写至少4个Test cases来测试一下。 比如 print (WMD(\"people like this car\", \"those guys enjoy driving that\"))\n",
"## \n",
"# 1.获取词典\n",
"wordvec_dict = get_word_vec(glovefile)\n",
"\n",
"print(WMD(\"people like this car\", \"those guys enjoy driving that\"))\n",
"print(WMD(\"people like this car\", \"people enjoy this car\"))\n",
"print(WMD(\"I love beijing\", \"I love shandong\"))\n",
"print(WMD(\"the boy like dog\", \"the girl like cat\"))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2. WMD算法的拓展\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 2.1 从欧式距离到Mahalanobis距离\n",
"WMD算法本身不需要任何的标注好的数据,所以它属于无监督学习。 而且在上述的WMD算法里使用的是欧式距离,$c(i,j)=||x_i-x_j||_2$, 那这种距离有什么缺点呢? 其中一个缺点是欧式距离的计算把一个空间里的每一个维度都看成了同样的权重,也就是每一个维度的重要性都是一致的,而且不同维度之间的相关性也没有考虑进来。如果想把这些信息考虑进来,我们则可以使用一个改进版的距离计算叫做Mahalanobis Distance, 距离计算变成 $c(i,j)=(x_i-x_j)^{\\top}M(x_i-x_j)$。\n",
"\n",
"这如何去理解呢? Mahalanobis distance可以理解成: 首先我们对原始空间里的样本做了一层线性的转换, 然后在转换后的空间里计算欧式距离。 我们把这个过程写一下: 原始空间里的点为 $x_i$, 然后我们定义一个转换矩阵 $L$, 这时候就可以得到 $||Lx_i - Lx_j||_2^2=||L(x_i-x_j)||_2^2=(L(x_i-x_j))^{\\top}L(x_i-x_j)=(x_i-x_j)^{\\top}L^{\\top}L(x_i-x_j)=(x_i-x_j)^{\\top}M(x_i-x_j)$, 相当于把$L^{\\top}L$看做是矩阵$M$。这时候很容易看出来矩阵$M$是PSD(positive semidefinite). \n",
"\n",
"假设我们定义了这种距离,这里的M如何选择呢? 当然,这是需要学出来的! 那为了学出M, 必须要有标注好的训练数据,也就需要监督学习场景! "
]
},
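{
"cell_type": "markdown",
"metadata": {},
"source": [
"A short numerical check of the identity derived above, with a randomly drawn $L$ (illustrative only): the squared Euclidean distance after transforming by $L$ matches the quadratic form with $M=L^{\\top}L$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check ||L x_i - L x_j||_2^2 == (x_i - x_j)^T M (x_i - x_j) with M = L^T L.\n",
"# L is a random illustrative matrix, not a learned one.\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"L = rng.normal(size=(5, 5))\n",
"M = L.T @ L                      # PSD by construction\n",
"x_i, x_j = rng.normal(size=5), rng.normal(size=5)\n",
"\n",
"lhs = np.linalg.norm(L @ x_i - L @ x_j) ** 2\n",
"rhs = (x_i - x_j) @ M @ (x_i - x_j)\n",
"print(np.isclose(lhs, rhs))      # True"
]
},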
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 2.2 从无监督学习到监督学习\n",
"\n",
"假如拥有数据集$D={(s_1, y_1),...,(s_n, y_n)}$, 这里每一个$s_i$代表的是一个句子, $y_i$代表的是对应每一个句子的标签(label)。 我们希望使用这个数据来学出M的值。那如何学习呢? 在这个问题上能使用的方法其实比较多,但在这里, 我们采用一个margin-based方法,这一点在SVM里面其实接触过。\n",
"\n",
"具体一点,假如我们手里有三个句子,$s_u, s_v, s_w$, 其中$s_u$和$s_v$是属于同一个类别,$s_w$是属于另一个类别,那这时候从KNN的角度来讲,我们希望$s_u, s_v$的距离要小于 $s_u, s_w$之间的距离。 用数学来表示: $d(s_u, s_v) < d(s_u, s_w)$, $~~d(.,.)$表示两个文本之间的距离。 其实我们希望它们之间的距离越大越好,也就是所谓的完全区间越宽越好。 但实际上,这个距离太大也没有什么意义,所以我们就干脆指定一个参数 $\\eta$来表示margin, 也就是只要它俩之间的距离大于这个margin就可以。如果小于margin就给他们一些惩罚(penalty),这一点跟SVM极其相似(slack variable)。所以从这个角度SVM也叫做margin-based classifier. \n",
"\n",
"把上述的表示成数学的话: $d(s_u, s_v) + \\eta < d(s_u, s_k)$, 但如果这个式子不成立的话就可以认为产生了penalty。 所以这部分就可以表示成大家熟悉的hinge loss: $max (0, d(s_u, s_v) + \\eta - d(s_u, s_k))$。 另外,我们同时也希望如果两个样本属于同一个类别, 那它俩的距离也比较相近。所以目标函数可以分为两个部分: 1. 同类型的样本距离尽量要近 2. 不同类型的样本距离尽量远一些。 \n",
"\n",
"当我们把所有的样本以及他们之间的大小关系考虑进来之后就可以得到最终的目标函数。 \n",
"\n",
"\\begin{equation}\n",
"L = \\lambda \\sum_{u=1}^{n}\\sum_{v\\in pos(u)}d(s_u, s_v) + (1-\\lambda)\\sum_{u=1}^{n}\\sum_{v\\in pos(u)}^{}\\sum_{w\\in neg(u)}^{} max (0, d(s_u, s_v) + \\eta - d(s_u, s_w))\n",
"\\end{equation}\n",
"\n",
"这里几个notation: pos(u)代表的是跟样本u属于同一个类别的样本, neg(u)指的是跟样本u属于不同类别的样本。 注意:类别的个数可以大于2, 就是多分类问题。 你也可以参考: http://jmlr.org/papers/volume10/weinberger09a/weinberger09a.pdf"
]
},
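{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the pairwise distances $d(\\cdot,\\cdot)$ are available, the objective is straightforward to evaluate. A minimal sketch on a single triplet, with made-up distance values (the numbers below are illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Evaluate the objective on one (s_u, s_v, s_w) triplet with made-up\n",
"# distances: s_v shares u's class, s_w does not.\n",
"d_uv, d_uw = 0.8, 1.1   # illustrative values of d(s_u, s_v), d(s_u, s_w)\n",
"lam, eta = 0.5, 1.0\n",
"\n",
"pull = d_uv                              # same-class term\n",
"push = max(0.0, d_uv + eta - d_uw)       # hinge term\n",
"loss = lam * pull + (1 - lam) * push\n",
"print(loss)   # 0.5*0.8 + 0.5*max(0, 0.8 + 1.0 - 1.1) = 0.75"
]
},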
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在这个式子里,第一部分代表的是让同一类型的样本的距离变小, 第二部分代表的是不同类型的样本之间要扩大距离。 \n",
"\n",
"- #### Q1: 这里$\\lambda$起到什么作用?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"// TODO: 你的答案....\n",
"\n",
"\n",
"#### 这里$\\lambda$起到了惩罚因子的作用,$\\lambda$等于1的时候重点优化同类样本距离越短越好,$\\lambda$等于0的时候则重点优化不同样本的距离越大越好,$\\lambda$在0到1之间来均衡2者的影响。\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- #### Q2: 在目标函数里有$\\eta$值,这个值怎么理解? 如果去设定这个值呢?\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"// TODO: 你的答案 ....\n",
" \n",
"\n",
"#### 这里$\\eta$是松弛变量,该值越大则限制不同类别的距离更远。带标签的训练样本学习得到。\n",
" \n",
" \n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里的$d_{u,v}$指的是$s_u$和$s_v$之间的距离, 而且这个距离被定义为:\n",
"\n",
"\\begin{equation} d_{u, v}=min_{T\\geq 0}\\sum_{i,j}^{}T_{ij}c(i,j)^u~~~~ s.t. \\sum_{j=1}^{}T_{ij}=d_i^u, ~~\\sum_{i=1}^{}T_{ij}=d_j'^v\\end{equation}\n",
"\n",
"这里 $c(i,j)=(x_i-x_j)^{\\top}M(x_i-x_j)$。 所以是不是可以察觉到这个问题目标函数里既包含了参数$M$也包含了线性规划问题。\n",
"\n",
"- #### Q3: 请试着去理解上述所有的过程,并回答: 优化问题如何解决呢? 请给出解题的思路 (文字适当配合推导过程)。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"// TODO 你的答案.... \n",
"\\begin{equation}\n",
"L = \\lambda \\sum_{u=1}^{n}\\sum_{v\\in pos(u)}(min_{T\\geq 0}\\sum_{i,j}^{}T_{ij}c(i,j)^u) + (1-\\lambda)\\sum_{u=1}^{n}\\sum_{v\\in pos(u)}^{}\\sum_{w\\in neg(u)}^{} max (0, d(s_u, s_v) + \\eta - d(s_u, s_w))\n",
"s.t. \\sum_{j=1}^{}T_{ij}=d_i^u, ~~\\sum_{i=1}^{}T_{ij}=d_j'^v\n",
"\\end{equation}\n",
"\n",
"使用梯度下降法分别求解T和M\n",
"\n",
"\n",
"\n",
"\n"
]
},
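{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to make the outline above concrete is the following alternating sketch on a single synthetic triplet. Everything here is an illustrative assumption rather than the paper's exact algorithm: the data are random, `wmd_with_cost` is a hypothetical helper that solves the inner LP for a fixed $M$, the optimal plans are then frozen to obtain a (sub)gradient in $M$, and the eigenvalue clipping is one simple way to project back onto the PSD cone."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hedged sketch of the alternating scheme above on one synthetic\n",
"# triplet (s_u, s_v same class; s_w different class). Data, step size,\n",
"# and the projection step are illustrative assumptions.\n",
"import numpy as np\n",
"from cvxopt import matrix, solvers\n",
"\n",
"solvers.options['glpk'] = {'msg_lev': 'GLP_MSG_OFF'}\n",
"\n",
"def wmd_with_cost(X1, d1, X2, d2, M):\n",
"    # Inner LP for fixed M; returns distance, plan T, pairwise diffs.\n",
"    n1, n2 = len(d1), len(d2)\n",
"    diffs = X1[:, None, :] - X2[None, :, :]            # (n1, n2, dim)\n",
"    c = np.einsum('ijk,kl,ijl->ij', diffs, M, diffs)   # c(i,j)\n",
"    A = np.zeros((n1 + n2, n1 * n2))\n",
"    for i in range(n1):\n",
"        A[i, i * n2:(i + 1) * n2] = 1.0                # row sums = d1\n",
"    for j in range(n2):\n",
"        A[n1 + j, j::n2] = 1.0                         # column sums = d2\n",
"    G, h = matrix(-np.eye(n1 * n2)), matrix(np.zeros(n1 * n2))\n",
"    sol = solvers.lp(matrix(c.reshape(-1)), G, h,\n",
"                     A=matrix(A), b=matrix(np.r_[d1, d2]), solver='glpk')\n",
"    T = np.array(sol['x']).reshape(n1, n2)\n",
"    return sol['primal objective'], T, diffs\n",
"\n",
"rng = np.random.default_rng(0)\n",
"dim = 4\n",
"Xu, Xv, Xw = (rng.normal(size=(3, dim)) for _ in range(3))\n",
"du = dv = dw = np.full(3, 1.0 / 3.0)\n",
"M = np.eye(dim)\n",
"lam, eta, lr = 0.5, 1.0, 0.05\n",
"\n",
"for step in range(20):\n",
"    duv, Tuv, Duv = wmd_with_cost(Xu, du, Xv, dv, M)\n",
"    duw, Tuw, Duw = wmd_with_cost(Xu, du, Xw, dw, M)\n",
"    # With the plans T* frozen, each d(.,.) is linear in M with\n",
"    # gradient sum_ij T*_ij (x_i - x_j)(x_i - x_j)^T.\n",
"    g_uv = np.einsum('ij,ijk,ijl->kl', Tuv, Duv, Duv)\n",
"    g_uw = np.einsum('ij,ijk,ijl->kl', Tuw, Duw, Duw)\n",
"    grad = lam * g_uv\n",
"    if duv + eta - duw > 0:                            # hinge is active\n",
"        grad += (1 - lam) * (g_uv - g_uw)\n",
"    M = M - lr * grad\n",
"    w, V = np.linalg.eigh(M)                           # project onto PSD cone\n",
"    M = (V * np.clip(w, 0.0, None)) @ V.T\n",
"\n",
"print(np.round(M, 3))"
]
},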
{
"cell_type": "markdown",
"metadata": {},
"source": [
"对于上述问题,其实我们也可以采用不一样的损失函数来求解M。 一个常用的损失函数叫作 “kNN-LOO error”, 相当于把KNN的准确率转换成了smooth differential loss function. 感兴趣的朋友可以参考: https://papers.nips.cc/paper/6139-supervised-word-movers-distance.pdf\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"以上是优化部分的一个简短的作业,通过这些练习会对优化理论有更清晰的认知。 Good luck for everyone! "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}