{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 线性规划与Word Mover's Distance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"WMD在文本分析领域算作是一个比较经典的算法,它可以用来计算两个文本之间的相似度。 比如问答系统中,可以判断一个用户的query跟哪一个知识库里的问题最相近。而且,计算两个文本之间的相似度这个问题是NLP的核心,这也是为什么文本相似度计算这么重要的原因。 \n",
"\n",
"背景: 在文本相似度匹配问题上如果使用tf-idf等模型,那这时候假如两个文本中没有出现共同的单词,则计算出来的相似度为0,但我们知道实际上很多时候单词可能不一样,但表示的内容确是类似的。 比如 ”People like this car“, \"Those guys enjoy driving that\", 虽然没有任何一样的单词,意思确是类似的。 这是WMD算法提出来的初衷。\n",
"\n",
"WMD作为文本相似度计算的一种方法,最早由Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, Kilian Q. Weinberger等人提出。但实际上它的想法极其简单,可以认为是Transportation Problem用在了词向量上, 其核心是线性规划。 对于Transportation问题在课上已经讲过,仍不清楚的朋友可以回顾一下课程的内容。 \n",
"\n",
"在Section B里我们需要做两件事情: 1. 实现WMD算法来计算两个字符串之间的距离。 2. WMD的拓展方案"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1. WMD算法的实现\n",
"具体算法的实现是基于线性规划问题,细节请参考WMD的论文。 核心思想是把第一个句子转换成第二个句子过程中需要花费的最小cost。 \n",
"\n",
"<img src=\"picture1.png\" alt=\"drawing\" width=\"600\"/>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"线性规划问题即可以写成如下形式:\n",
"\n",
"<img src=\"picture2.png\" alt=\"drawing\" width=\"500\"/>\n",
"\n"
]
},
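{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before implementing WMD itself, here is a minimal sketch of how cvxopt solves an LP in exactly this min $c^{\\top}x$ s.t. $Gx \\leq h$, $Ax = b$ form, on a tiny made-up 2x2 transportation problem (the costs and masses below are illustrative only, not part of the assignment):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy 2x2 transportation LP with cvxopt (illustrative numbers only).\n",
"# Variables x = (T11, T12, T21, T22); rows must ship their supply,\n",
"# columns must receive their demand, and all T_ij must be non-negative.\n",
"import numpy as np\n",
"from cvxopt import matrix, solvers\n",
"\n",
"cost = matrix([1.0, 2.0, 3.0, 1.0])          # c(i,j), flattened row by row\n",
"G = matrix(-np.identity(4))                  # T_ij >= 0  <=>  -T_ij <= 0\n",
"h = matrix(np.zeros((4, 1)))\n",
"A = matrix(np.array([[1.0, 1.0, 0.0, 0.0],   # supply of row 1\n",
"                     [0.0, 0.0, 1.0, 1.0],   # supply of row 2\n",
"                     [1.0, 0.0, 1.0, 0.0]])) # demand of column 1 (column 2 is implied)\n",
"b = matrix([0.5, 0.5, 0.6])\n",
"\n",
"solvers.options['show_progress'] = False\n",
"sol = solvers.lp(cost, G, h, A=A, b=b)\n",
"print(sol['x'])                   # optimal transport plan\n",
"print(sol['primal objective'])    # minimal total cost"
]
},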
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里的参数是$T_{ij}$, 需要通过LP Solver去解决。$c(i,j)$指的是两个单词之间的距离, $c_{i,j}=||x_i-x_j||_2$。 参考: $||x||_2=\\sqrt{x_1^2+...+x_d^2}$\n",
"\n",
"为了实现WMD算法,首先需要词向量。 在这里,我们就不自己去训练了,直接使用已经训练好的词向量。 \n",
"请下载训练好的Glove向量:https://nlp.stanford.edu/projects/glove/, 下载其中的 glove.6B.zip, 并使用d=100维的向量。 由于文件较大,需要一些时间来下载。 \n",
"\n",
"请注意:提交作业时不要上传此文件, 但文件路径请使用我们给定的路径,不要改变。 "
]
},
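{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check of the $||x||_2$ formula above, on made-up 3-d vectors (illustration only; the actual word vectors are 100-d GloVe vectors):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: ||x - y||_2 computed two ways on toy vectors.\n",
"import numpy as np\n",
"\n",
"x, y = np.array([1.0, 2.0, 2.0]), np.zeros(3)\n",
"print(np.linalg.norm(x - y))           # ||x - y||_2 = 3.0\n",
"print(np.sqrt(np.sum((x - y) ** 2)))   # same value, via the definition"
]
},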
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"# 读取Glove文件。 注意: 不要试图修改文件以及路径\n",
"import numpy as np\n",
"\n",
"glovefile = open(\"glove.6B.100d.txt\",\"r\",encoding=\"utf-8\") \n",
"embedding_matrix = {} #key:单词 val:矢量\n",
"\n",
"\n",
"for line in glovefile:\n",
" word, vec = line.split(maxsplit=1)\n",
" vec = np.fromstring(vec, 'float', sep = ' ')\n",
" embedding_matrix[word] = vec\n"
]
},
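{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small sanity check on the loaded embeddings (assuming glove.6B.100d.txt was downloaded as described): glove.6B covers a 400,000-word vocabulary, and each vector should be 100-dimensional."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: vocabulary size and vector dimensionality of the loaded GloVe file.\n",
"print(len(embedding_matrix))           # expected: 400000\n",
"print(embedding_matrix['the'].shape)   # expected: (100,)"
]
},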
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"def word_split(sent):\n",
" \n",
" import string\n",
" words = sent.split()\n",
" \n",
" table = str.maketrans('', '', string.punctuation)\n",
" stripped = [w.translate(table).lower() for w in words]\n",
" \n",
" return stripped"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['this', 'is', 'odd']"
]
},
"execution_count": 79,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"word_split(\"This is odd.\")"
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"def c_dist(word1,word2):\n",
" if words1 in embedding_matrix:\n",
" vec1 = embedding_matrix[word1]\n",
" else:\n",
" vec1 = None\n",
" \n",
" if words2 in embedding_matrix:\n",
" vec2 = embedding_matrix[word2]\n",
" else:\n",
" vec2 = None\n",
" \n",
" return vec1.dot(vec2)"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"16.628058119546"
]
},
"execution_count": 73,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"c_dist(\"apple\",\"orange\")"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [],
"source": [
"words1, words2 = word_split(\"What is that?\"), word_split(\"Completely uncertain\")\n",
"l1, l2 = len(words1), len(words2)\n",
"\n",
"c_list = []\n",
"for i in range(l1):\n",
" row = []\n",
" for j in range(l2):\n",
" row.append(c_dist(words1[i],words2[j]))\n",
" c_list.append(row)"
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[17.865118885125003, 14.942913495031],\n",
" [16.718426428689998, 13.662786470655],\n",
" [18.5315726091344, 14.094461091497202]]"
]
},
"execution_count": 82,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"c_list"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [],
"source": [
"#词频,矢量 d_i, d_j\n",
"\n",
"def freq(sent):\n",
" freq = {}\n",
" l = len(sent)\n",
" for i in sent:\n",
" if i in freq:\n",
" freq[i] += 1\n",
" else:\n",
" freq[i] = 1\n",
" for i in freq:\n",
" freq[i] = freq[i]/l\n",
" \n",
" return freq"
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'apple': 0.75, 'orange': 0.25}"
]
},
"execution_count": 92,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"freq([\"apple\",\"orange\",\"apple\",\"apple\"])"
]
},
{
"cell_type": "code",
"execution_count": 255,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 编写WMD函数来计算两个句子之间的相似度\n",
"\n",
"def WMD (sent1, sent2):\n",
" \"\"\"\n",
" 这是主要的函数模块。参数sent1是第一个句子, 参数sent2是第二个句子,可以认为没有经过分词。\n",
" 在英文里,用空格作为分词符号。\n",
" \n",
" 在实现WMD算法的时候,需要用到LP Solver用来解决Transportation proboem. 请使用http://cvxopt.org/examples/tutorial/lp.html\n",
" 也可以参考blog: https://scaron.info/blog/linear-programming-in-python-with-cvxopt.html\n",
" \n",
" 需要做的事情为:\n",
" \n",
" 1. 对句子做分词: 调用 .split() 函数即可\n",
" 2. 获取每个单词的词向量。这需要读取文件之后构建embedding matrix. \n",
" 3. 构建lp问题,并用solver解决\n",
" \n",
" 可以自行定义其他的函数,但务必不要改写WMD函数名。测试时保证WMD函数能够正确运行。\n",
" \"\"\"\n",
" import numpy as np\n",
" from numpy import zeros, hstack, vstack, ones, identity\n",
" from cvxopt import matrix, solvers, spmatrix\n",
" \n",
" words1, words2 = word_split(sent1), word_split(sent2)\n",
" l1, l2 = len(words1), len(words2)\n",
" \n",
" c_list = []\n",
" for i in range(l1):\n",
" row = []\n",
" for j in range(l2):\n",
" row.append(c_dist(words1[i],words2[j]))\n",
" \n",
" c_list.append(row)\n",
" \n",
" c = []\n",
" for i in c_list:\n",
" c = c + i\n",
" \n",
" \n",
" \n",
" \n",
" G = -1 * identity(l1 * l2)\n",
" \n",
" h = np.zeros((l1 * l2, 1))\n",
" \n",
" #矩阵A\n",
" \n",
" #Initialize A\n",
" \n",
" A = np.zeros((l1+l2, l1*l2))\n",
" \n",
" for i in range(0,l1):\n",
" for j in range(0,l2):\n",
" \n",
" A[i, l2 * i + j] = 1\n",
" \n",
" A[l1 + j, j + i * l2] = 1\n",
" \n",
" G = np.append(G,[A[-1]], axis=0)\n",
" G = np.append(G,[-1*A[-1]], axis=0)\n",
" \n",
" sent1_freq = freq(words1)\n",
" sent2_freq = freq(words2)\n",
" \n",
" b = []\n",
" for i in words1:\n",
" b.append(sent1_freq[i])\n",
" for j in words2:\n",
" b.append(sent2_freq[j])\n",
" # for k in range(l1+l2,l1*l2):\n",
" # b.append(0)\n",
" #print(len(b))\n",
" h = np.append(h,[[b[-1]],[b[-1]]], axis=0)\n",
" \n",
" \n",
" \n",
" c = matrix(c)\n",
" G = matrix(G)\n",
" h = matrix(h)\n",
" A = matrix(A[:-1])\n",
" b = matrix(b[:-1])\n",
" \n",
" #solvers.options['show_progress'] = False\n",
" solution = solvers.lp(c,G,h,A=A,b=b) \n",
" solution['x']\n",
" result = 0\n",
" for i in range(l1 * l2):\n",
" result += solution['x'][i] * c[i]\n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" #return wmd_dist\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 256,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"13.644383982310583"
]
},
"execution_count": 256,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sent1 = \"This is ture.\"\n",
"sent2 = \"I believe this.\"\n",
"\n",
"WMD(sent1,sent2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"## TODO: 自己写至少4个Test cases来测试一下。 比如 print (WMD(\"people like this car\", \"those guys enjoy driving that\"))\n",
"## "
]
},
{
"cell_type": "code",
"execution_count": 259,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"20.89196132529693"
]
},
"execution_count": 259,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# case 1\n",
"WMD(\"This is interesting.\",\"I will spend more time on it.\")"
]
},
{
"cell_type": "code",
"execution_count": 261,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"18.8263215757496"
]
},
"execution_count": 261,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# case 2\n",
"WMD(\"This is interesting.\",\"It is awful.\")\n",
"\n",
"#意思相反"
]
},
{
"cell_type": "code",
"execution_count": 268,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"9.739337259889615"
]
},
"execution_count": 268,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#case 3\n",
"WMD(\"the Information paradox is intriguing\",\"Quantum effects may play a role.\")"
]
},
{
"cell_type": "code",
"execution_count": 265,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"12.097319274333442"
]
},
"execution_count": 265,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#case 4\n",
"WMD(\"this sampling procedure yields similar results\",\"the new methods make no difference.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2. WMD算法的拓展\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 2.1 从欧式距离到Mahalanobis距离\n",
"WMD算法本身不需要任何的标注好的数据,所以它属于无监督学习。 而且在上述的WMD算法里使用的是欧式距离,$c(i,j)=||x_i-x_j||_2$, 那这种距离有什么缺点呢? 其中一个缺点是欧式距离的计算把一个空间里的每一个维度都看成了同样的权重,也就是每一个维度的重要性都是一致的,而且不同维度之间的相关性也没有考虑进来。如果想把这些信息考虑进来,我们则可以使用一个改进版的距离计算叫做Mahalanobis Distance, 距离计算变成 $c(i,j)=(x_i-x_j)^{\\top}M(x_i-x_j)$。\n",
"\n",
"这如何去理解呢? Mahalanobis distance可以理解成: 首先我们对原始空间里的样本做了一层线性的转换, 然后在转换后的空间里计算欧式距离。 我们把这个过程写一下: 原始空间里的点为 $x_i$, 然后我们定义一个转换矩阵 $L$, 这时候就可以得到 $||Lx_i - Lx_j||_2^2=||L(x_i-x_j)||_2^2=(L(x_i-x_j))^{\\top}L(x_i-x_j)=(x_i-x_j)^{\\top}L^{\\top}L(x_i-x_j)=(x_i-x_j)^{\\top}M(x_i-x_j)$, 相当于把$L^{\\top}L$看做是矩阵$M$。这时候很容易看出来矩阵$M$是PSD(positive semidefinite). \n",
"\n",
"假设我们定义了这种距离,这里的M如何选择呢? 当然,这是需要学出来的! 那为了学出M, 必须要有标注好的训练数据,也就需要监督学习场景! "
]
},
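{
"cell_type": "markdown",
"metadata": {},
"source": [
"The identity above is easy to verify numerically. The sketch below draws a random $L$, sets $M = L^{\\top}L$, and checks that $(x_i-x_j)^{\\top}M(x_i-x_j)$ equals the squared Euclidean distance between $Lx_i$ and $Lx_j$ (toy 4-d vectors, for illustration only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Numerical check: (x_i - x_j)^T M (x_i - x_j) == ||L x_i - L x_j||_2^2 when M = L^T L.\n",
"import numpy as np\n",
"\n",
"rng = np.random.RandomState(0)\n",
"L = rng.randn(4, 4)                    # a made-up linear transformation\n",
"M = L.T @ L                            # PSD by construction\n",
"xi, xj = rng.randn(4), rng.randn(4)\n",
"\n",
"diff = xi - xj\n",
"mahalanobis_sq = diff @ M @ diff\n",
"euclidean_sq = np.sum((L @ xi - L @ xj) ** 2)\n",
"print(np.isclose(mahalanobis_sq, euclidean_sq))   # True"
]
},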
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 2.2 从无监督学习到监督学习\n",
"\n",
"假如拥有数据集$D={(s_1, y_1),...,(s_n, y_n)}$, 这里每一个$s_i$代表的是一个句子, $y_i$代表的是对应每一个句子的标签(label)。 我们希望使用这个数据来学出M的值。那如何学习呢? 在这个问题上能使用的方法其实比较多,但在这里, 我们采用一个margin-based方法,这一点在SVM里面其实接触过。\n",
"\n",
"具体一点,假如我们手里有三个句子,$s_u, s_v, s_w$, 其中$s_u$和$s_v$是属于同一个类别,$s_w$是属于另一个类别,那这时候从KNN的角度来讲,我们希望$s_u, s_v$的距离要小于 $s_u, s_w$之间的距离。 用数学来表示: $d(s_u, s_v) < d(s_u, s_w)$, $~~d(.,.)$表示两个文本之间的距离。 其实我们希望它们之间的距离越大越好,也就是所谓的完全区间越宽越好。 但实际上,这个距离太大也没有什么意义,所以我们就干脆指定一个参数 $\\eta$来表示margin, 也就是只要它俩之间的距离大于这个margin就可以。如果小于margin就给他们一些惩罚(penalty),这一点跟SVM极其相似(slack variable)。所以从这个角度SVM也叫做margin-based classifier. \n",
"\n",
"把上述的表示成数学的话: $d(s_u, s_v) + \\eta < d(s_u, s_k)$, 但如果这个式子不成立的话就可以认为产生了penalty。 所以这部分就可以表示成大家熟悉的hinge loss: $max (0, d(s_u, s_v) + \\eta - d(s_u, s_k))$。 另外,我们同时也希望如果两个样本属于同一个类别, 那它俩的距离也比较相近。所以目标函数可以分为两个部分: 1. 同类型的样本距离尽量要近 2. 不同类型的样本距离尽量远一些。 \n",
"\n",
"当我们把所有的样本以及他们之间的大小关系考虑进来之后就可以得到最终的目标函数。 \n",
"\n",
"\\begin{equation}\n",
"L = \\lambda \\sum_{u=1}^{n}\\sum_{v\\in pos(u)}d(s_u, s_v) + (1-\\lambda)\\sum_{u=1}^{n}\\sum_{v\\in pos(u)}^{}\\sum_{w\\in neg(u)}^{} max (0, d(s_u, s_v) + \\eta - d(s_u, s_w))\n",
"\\end{equation}\n",
"\n",
"这里几个notation: pos(u)代表的是跟样本u属于同一个类别的样本, neg(u)指的是跟样本u属于不同类别的样本。 注意:类别的个数可以大于2, 就是多分类问题。 你也可以参考: http://jmlr.org/papers/volume10/weinberger09a/weinberger09a.pdf"
]
},
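{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the hinge term concrete, the sketch below evaluates $max(0, d(s_u, s_v) + \\eta - d(s_u, s_w))$ on made-up distances: once the cross-class distance exceeds the within-class distance by more than the margin, the penalty vanishes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy evaluation of the hinge term in the objective (made-up distances).\n",
"def hinge(d_uv, d_uw, eta):\n",
"    return max(0.0, d_uv + eta - d_uw)\n",
"\n",
"print(hinge(d_uv=1.0, d_uw=3.0, eta=1.0))   # 0.0 -> margin satisfied, no penalty\n",
"print(hinge(d_uv=1.0, d_uw=1.5, eta=1.0))   # 0.5 -> margin violated, penalized"
]
},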
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在这个式子里,第一部分代表的是让同一类型的样本的距离变小, 第二部分代表的是不同类型的样本之间要扩大距离。 \n",
"\n",
"- #### Q1: 这里$\\lambda$起到什么作用?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A1: \n",
"\n",
"$\\lambda$ 起到调节目标函数两部分权重的作用。\n",
"\n",
"上面所列文献指出:\"Generally, the parameter μ(对应此处$\\lambda$) can be tuned via cross validation, though in our experience, the resultsfrom minimizing the loss function in Eq.(13) did not depend sensitively on the value of μ. Inpractice, the valueμ=0.5 worked well.\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- #### Q2: 在目标函数里有$\\eta$值,这个值怎么理解? 如果去设定这个值呢?\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"// TODO: 你的答案 ....\n",
" \n",
"假设有三个句子矢量$s_{u}$, $s_{v}$ 和 $s_{w}$, 为了令矢量之间的距离足够大,引入参数 $\\eta$ 将矢量间距离控制在 \n",
"$\\eta$ 以上,否则将引入penalty。\n",
" \n",
"添加约束 $\\eta\\geq 0$, 求解优化问题。\n",
" \n",
" \n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里的$d_{u,v}$指的是$s_u$和$s_v$之间的距离, 而且这个距离被定义为:\n",
"\n",
"\\begin{equation} d_{u, v}=min_{T\\geq 0}\\sum_{i,j}^{}T_{ij}c(i,j)^u~~~~ s.t. \\sum_{j=1}^{}T_{ij}=d_i^u, ~~\\sum_{i=1}^{}T_{ij}=d_j'^v\\end{equation}\n",
"\n",
"这里 $c(i,j)=(x_i-x_j)^{\\top}M(x_i-x_j)$。 所以是不是可以察觉到这个问题目标函数里既包含了参数$M$也包含了线性规划问题。\n",
"\n",
"- #### Q3: 请试着去理解上述所有的过程,并回答: 优化问题如何解决呢? 请给出解题的思路 (文字适当配合推导过程)。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"// TODO 你的答案.... \n",
"\n",
"优化目标函数获得M\n",
"\\begin{equation}\n",
"L = \\lambda \\sum_{u=1}^{n}\\sum_{v\\in pos(u)}d(s_u, s_v) + (1-\\lambda)\\sum_{u=1}^{n}\\sum_{v\\in pos(u)}^{}\\sum_{w\\in neg(u)}^{} max (0, d(s_u, s_v) + \\eta - d(s_u, s_w))\n",
"\\end{equation}\n",
"其中$\\lambda$作为超参数。\n",
"在计算中将该目标函数中的$max (0, d(s_u, s_v) + \\eta - d(s_u, s_w))$改为$-(d(s_u, s_v) + \\eta - d(s_u, s_w))$得到\n",
"最终实际计算中使用的目标函数,我们希望目标函数中的第二部分总是非负的。\n",
"\n",
"约束为:$M_{ij}\\geq0$\n",
"\n",
"$\\eta\\geq 0$\n",
"\n",
"$d(s_u, s_v) + \\eta - d(s_u, s_w) \\leq 0$"
]
},
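{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that once $M$ has been learned, only the cost computation changes relative to Part 1. A minimal sketch of the Mahalanobis word cost, assuming the embedding_matrix from Part 1 is loaded and using a placeholder $M$ (the identity, under which it reduces to the squared Euclidean distance):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical Mahalanobis word cost c(i,j) = (x_i - x_j)^T M (x_i - x_j),\n",
"# given some PSD matrix M (100 x 100 for the GloVe vectors used here).\n",
"import numpy as np\n",
"\n",
"def c_dist_mahalanobis(word1, word2, M):\n",
"    diff = embedding_matrix[word1] - embedding_matrix[word2]\n",
"    return diff @ M @ diff\n",
"\n",
"M = np.eye(100)   # placeholder: with M = I this is the squared Euclidean distance\n",
"print(c_dist_mahalanobis('apple', 'orange', M))"
]
},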
{
"cell_type": "markdown",
"metadata": {},
"source": [
"对于上述问题,其实我们也可以采用不一样的损失函数来求解M。 一个常用的损失函数叫作 “kNN-LOO error”, 相当于把KNN的准确率转换成了smooth differential loss function. 感兴趣的朋友可以参考: https://papers.nips.cc/paper/6139-supervised-word-movers-distance.pdf\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"以上是优化部分的一个简短的作业,通过这些练习会对优化理论有更清晰的认知。 Good luck for everyone! "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}