{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## Building a Simple QA System"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal of this project is to build a simple retrieval-based question answering system, which is the most classic and also one of the most effective approaches. \n",
"\n",
"```Do not create a separate file; write everything in this notebook, and do not rename the existing functions (you may define new functions of your own as needed)```\n",
"\n",
"```Estimated completion time```: 5-10 hours"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### A retrieval-based QA system\n",
"The data needed for the QA system is provided; every question has a corresponding answer, so each sample can be understood as a ``<question, answer>`` pair. The core of the system is: when a user enters a question, first find the most similar question already stored in the library, then simply return its answer (in practice you could also extract entities or keywords from it). A simple example:\n",
"\n",
"Suppose our library already contains the following <question, answer> pairs:\n",
"- <\"What business does 贪心学院 (Greedy Academy) mainly do?\", \"They mainly do education in artificial intelligence\">\n",
"- <\"Which companies in China do AI education?\", \"贪心学院\">\n",
"- <\"What is the relationship between AI and machine learning?\", \"Machine learning is a subfield of AI; many AI applications are built on machine learning techniques\">\n",
"- <\"What is the most essential language for AI?\", \"Python\">\n",
"- .....\n",
"\n",
"Suppose a user enters the question \"What does 贪心学院 do?\". The system first matches it against the questions already in the library. Here, \"What does 贪心学院 do?\" is clearly closest to \"What business does 贪心学院 mainly do?\", so once we locate that question we simply return its answer, \"They mainly do education in artificial intelligence\". The core problem therefore reduces to computing the similarity between two queries."
]
},
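{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal, illustrative sketch (not part of the required solution), the query-similarity idea above can be expressed with ```sklearn```'s tf-idf vectorizer plus cosine similarity; the toy queries below are invented for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"# Illustrative only: represent two toy queries as tf-idf vectors and compare them\n",
"toy_queries = ['what does greedy academy mainly do', 'what is greedy academy about']\n",
"tfidf = TfidfVectorizer().fit_transform(toy_queries)\n",
"print(cosine_similarity(tfidf[0], tfidf[1])[0][0])  # closer to 1 means more similar"
]
},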
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Tasks involved in the project\n",
"A QA system looks simple, but it involves quite a few components. Here is a brief overview; overall, the modules we are about to build include:\n",
"\n",
"- Reading the text: read the ```(question, answer)``` pairs from the corresponding file\n",
"- Text preprocessing: cleaning the text matters a lot and involves steps such as ```stop-word filtering```\n",
"- Text representation: how to represent a sentence is a central question; this involves ```tf-idf```, ```Glove``` and ```BERT Embedding```\n",
"- Text similarity matching: a core part of a retrieval-based system is computing the ```similarity``` between texts, then selecting the most similar question and returning its answer\n",
"- Inverted index: to speed up search, we need to build an ```inverted index``` that maps each word to the texts it appears in\n",
"- Semantic matching: using the inverted index directly misses words that are similar in meaning but not identical, so we handle this by building a list of ```similar words``` in advance and using it during search\n",
"- Spelling correction: we cannot assume user input is accurate, so the first step is to check it; if the user misspelled something, we correct it in the backend and search the library with the corrected query\n",
"- Ranking the results: the final ranking of returned results depends on the ```cosine similarity``` between documents, as well as on how many words matched in the inverted index\n"
]
},
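{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the ```inverted index``` module described above concrete, here is a minimal sketch with made-up toy data (illustrative, not the required implementation): each word is mapped to the set of question indices it appears in."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from collections import defaultdict\n",
"\n",
"# Illustrative only: build a word -> question-index mapping for a toy question list\n",
"toy_qlist = ['what is ai', 'what is ml', 'how do ai and ml relate']\n",
"inverted_index = defaultdict(set)\n",
"for idx, q in enumerate(toy_qlist):\n",
"    for w in q.split():\n",
"        inverted_index[w].add(idx)\n",
"print(sorted(inverted_index['ai']))  # questions 0 and 2 contain 'ai'"
]
},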
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Data needed for the project:\n",
"1. ```dev-v2.0.json```: contains the question-answer pairs, stored as JSON; you need to write a parser to extract the questions and answers. \n",
"2. ```glove.6B```: download this file from https://nlp.stanford.edu/projects/glove/ and use the d=200 word vectors\n",
"3. ```spell-errors.txt```: used to build the spelling-correction module. The first column of each line is the correct word, and the words listed after it are common misspellings. Note that no probabilities are given, i.e. no p(mistake|correct), so we can treat every type of mistake as ```equally likely```\n",
"4. ```vocab.txt```: tens of thousands of common English words; use this vocabulary to check whether a word is misspelled\n",
"5. ```testdata.txt```: some collected test data you can use to test your spell corrector. This file is only for testing your own program."
]
},
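{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of how ```spell-errors.txt``` might be parsed under the equal-probability assumption described above; the 'correct: error1, error2' line format and all names here are assumptions for illustration, not a prescribed implementation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: parse lines assumed to look like 'raining: rainning, raning'\n",
"# and assign each misspelling the uniform probability p(mistake|correct) = 1/n\n",
"def parse_spell_errors(lines):\n",
"    channel_prob = {}\n",
"    for line in lines:\n",
"        correct, errors = line.strip().split(':')\n",
"        candidates = [e.strip() for e in errors.split(',')]\n",
"        channel_prob[correct.strip()] = {e: 1.0 / len(candidates) for e in candidates}\n",
"    return channel_prob\n",
"print(parse_spell_errors(['raining: rainning, raning']))"
]
},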
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this project you will use the following tools:\n",
"- ```sklearn```. For installation see: http://scikit-learn.org/stable/install.html. sklearn contains all kinds of machine learning algorithms and data-processing utilities, including the bag-of-words model needed for this project. \n",
"- ```jieba```, for word segmentation. For usage see https://github.com/fxsjy/jieba\n",
"- ```bert embedding```: https://github.com/imgarylai/bert-embedding\n",
"- ```nltk```: https://www.nltk.org/index.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Part 1: Processing the training data: file reading and preprocessing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- ```Reading the text```: read the data from ```dev-v2.0.json``` and store what is read into a list\n",
"- ```Text preprocessing```: apply stop-word filtering and other text-level processing to the questions\n",
"- ```Visual analysis```: do some visual analysis of the given samples to better understand the data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Section 1.1: Reading the text\n",
"Read the given data into ```qlist``` and ```alist```, both lists, where ```qlist``` is the list of questions and ```alist``` is the corresponding list of answers"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"def read_corpus():\n",
"    \"\"\"\n",
"    Read the given corpus and write the questions and answers into qlist and alist respectively.\n",
"    Do not process the strings in any way here (that is handled in Part 2.3).\n",
"    qlist = [\"question 1\", \"question 2\", \"question 3\" ....]\n",
"    alist = [\"answer 1\", \"answer 2\", \"answer 3\" ....]\n",
"    Make sure every question lines up with its answer (same index).\n",
"    \"\"\"\n",
"    # TODO: code to be completed ...\n",
"    qlist = []\n",
"    alist = []\n",
"    filename = 'train-v2.0.json'  # the SQuAD training file was used here; the provided dev-v2.0.json has the same format\n",
"    with open(filename, 'r', encoding='utf-8') as f:\n",
"        datas = json.load(f)\n",
"    for d in datas['data']:\n",
"        for p in d['paragraphs']:\n",
"            for qa in p['qas']:\n",
"                if not qa['is_impossible']:\n",
"                    qlist.append(qa['question'])\n",
"                    alist.append(qa['answers'][0]['text'])\n",
"\n",
"    assert len(qlist) == len(alist)  # make sure the lengths match\n",
"    return qlist, alist\n",
"qlist, alist = read_corpus()\n",
"print(qlist[:3])  # preview a few questions; printing the full list floods the notebook output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1.2 Understanding the data (visual analysis / statistics)\n",
"Understanding the data is the first step of any AI work; you need a fairly intuitive feel for it. Here, compute some simple statistics:\n",
"\n",
"- the total number of words appearing in ```qlist```\n",
"- a ```histogram``` plot of the word frequencies"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"57807\n"
]
}
],
"source": [
"# TODO: How many words appear in qlist in total? How many distinct (unique) words are there?\n",
"# Simple tokenization is enough here: for English, split on spaces; no other filtering for now\n",
"words_qlist = dict()\n",
"for q in qlist:\n",
"    words = q.strip().split(' ')\n",
"    for w in words:\n",
"        if w.lower() in words_qlist:\n",
"            words_qlist[w.lower()] += 1\n",
"        else:\n",
"            words_qlist[w.lower()] = 1\n",
"word_total = len(words_qlist)  # number of unique words; sum(words_qlist.values()) gives the total token count\n",
"print(word_total)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYAAAAD8CAYAAAB+UHOxAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAE3JJREFUeJzt3XuwXWV5x/HvQxJIw0WSHGQIwUkQvIByN6UgcrHVSBljFR2qYAhIpha51Q6CzFQ60gGBYpRaHdQgVAYqGjHTQZRrHRTQJFySEBBE1BOuSbhZJpCYp3/sdcgx5CQn+5K1d97vZ4Y566y19z7velnZv1mX930iM5EklWeruhsgSaqHASBJhTIAJKlQBoAkFcoAkKRCGQCSVCgDQJIKZQBIUqEMAEkq1Mi6GwDQ19eXkyZNqrsZktRT5s+fvywzd2r2/V0RAJMmTWLevHl1N0OSekpE/K6V93sJSJIKZQBIUqEMAEkqVFfcA5CkwVatWkV/fz8rV66suyldYfTo0UycOJFRo0a19XMNAEldp7+/n+23355JkyYREXU3p1aZyfLly+nv72fy5Mlt/WwvAUnqOitXrmT8+PHFf/kDRATjx4/vyNmQASCpK/nlv1an+sIAkKRCeQ9AUte7+KaHeO7lVW37vLFjRnH21Le17fM21axZs5g5cyZjxoyprQ3QJQHw1IsrOXfOwrqbUftBIWn9nnt5FRd++J1t+7y6v29mzZrF8ccfbwAA/GlNtvV/brPqPigkdY+rr76aSy+9lIhgn3324Ytf/CInnXQSy5YtY6edduLKK6/kTW96EyeeeCLHHHMMxx57LADbbbcdf/zjH7njjjs4//zz6evrY9GiRRx44IF897vf5fLLL+eJJ57gyCOPpK+vj1tuuYWTTz6ZefPmERGcdNJJnHXWWZtlH7siACSpmyxevJgLLriAX/ziF/T19bFixQqmT5/+2n+zZ8/m9NNP54Ybbtjg59x7770sXryYCRMmcOihh/Lzn/+c008/ncsuu4zbb7+dvr4+5s+fz9KlS1m0aBEAzz///ObYRcCbwJL0Orfddhsf/ehH6evrA2DcuHHcddddfPzjHwfghBNO4M4779zo50yZMoWJEyey1VZbsd9++/H444+/7jW77747jz32GKeddho33XQTO+ywQ1v3ZUMMAElqwciRI1mzZg0Aa9as4dVXX31t2zbbbPPa8ogRI1i9evXr3j927Fjuv/9+jjjiCL7xjW/wqU99qvONrhgAkrSOo446iuuvv57ly5cDsGLFCg455BCuu+46AK655hoOO+wwoDGd/fz58wGYO3cuq1Zt/Gml7bffnpdeegmAZcuWsWbNGj7ykY9wwQUXsGDBgk7s0np5D0BS1xs7ZlRbH9IYO2bDc+rsvffenHfeeRx++OGMGDGC/fffn8svv5wZM2ZwySWXvHYTGOCUU05h2rRp7LvvvkydOpVtt912o39/5syZTJ06lQkTJjBr1ixmzJjx2lnEhRde2PoODlNk5mb7Y0PZZY+988lHF9fdDM6ds7ArnkaSSrdkyRLe/va3192MrrK+PomI+Zl5ULOf6SUgSSqUASBJhTIAJHWlbrg83S061RcGgKSuM3r0aJYvX24IsLYewOjRo9v+2T4FJKnrTJw4kf7+fp599tm6m9IVBiqCtZsBIKnrjBo1qu3Vr/R6XgKSpEIZAJJUKANAkgplAEhSoQwASSqUASBJhTIAJKlQBoAkFcoAkKRCGQCSVCgDQJIKZQBIUqEMAEkqlAEgSYUyACSpUAaAJBXKAJCkQhkAklQoA0CSCmUASFKhDABJKpQBIEmFMgAkqVAGgCQVygCQpEIZAJJUKANAkgplAEhSoQwASSqUASBJhTIAJKlQBoAkFcoAkKRCGQCSVCgDQJIKZQBIUqEMAEkqlAEgSYUyACSpUAaAJBXKAJCkQhkAklQoA0CSCmUASFKhDA
BJKpQBIEmFMgAkqVAGgCQVygCQpEIZAJJUKANAkgplAEhSoQwASSqUASBJhTIAJKlQBoAkFcoAkKRCGQCSVCgDQJIKZQBIUqEMAEkqlAEgSYUyACSpUAaAJBXKAJCkQhkAklQoA0CSCmUASFKhDABJKpQBIEmFMgAkqVAGgCQVygCQpEIZAJJUKANAkgplAEhSoQwASSqUASBJhTIAJKlQBoAkFcoAkKRCGQCSVCgDQJIKZQBIUqEMAEkqlAEgSYXaaABExG4RcXtEPBgRiyPijGr9uIi4OSIeqX6OrdZHRHw1Ih6NiAci4oBO74QkadMN5wxgNfDZzNwLOBg4NSL2As4Bbs3MPYFbq98BPgDsWf03E/h621stSWrZRgMgM5/MzAXV8kvAEmBXYBpwVfWyq4APVcvTgKuz4W5gx4jYpe0tlyS1ZJPuAUTEJGB/4B5g58x8str0FLBztbwr8IdBb+uv1q37WTMjYl5EzHv5hec2sdmSpFYNOwAiYjvgB8CZmfni4G2ZmUBuyh/OzCsy86DMPGjMG8ZuylslSW0wrACIiFE0vvyvycw51eqnBy7tVD+fqdYvBXYb9PaJ1TpJUhcZzlNAAXwbWJKZlw3aNBeYXi1PB340aP0nq6eBDgZeGHSpSJLUJUYO4zWHAicACyPivmrd54GLgO9FxMnA74CPVdtuBI4GHgVeBma0tcWSpLbYaABk5p1ADLH5vet5fQKnttguSVKHORJYkgo1nHsAsyPimYhYNGjdfhFxd0TcVz3KOaVa7yhgSeoRwzkD+A4wdZ11FwP/mpn7Af9S/Q6OApaknjGckcA/A1asuxrYoVp+A/BEtewoYEnqEcN5Cmh9zgR+EhGX0giRQ6r1Q40C9jFQSeoyzd4E/jRwVmbuBpxFY5zAJnEqCEmqV7MBMB0YGBF8PTClWh72KGCngpCkejUbAE8Ah1fLRwGPVMuOApakHrHRewARcS1wBNAXEf3AF4BTgK9ExEhgJY0nfsBRwJLUM4YzEvjvh9h04Hpe6yhgSeoRjgSWpEI1NRK4Wn9aRDxU1Qm+eND6c6uRwA9HxPs70WhJUuuGMw7gO8B/AFcPrIiII2kM+to3M1+JiDdW6/cCjgP2BiYAt0TEWzLzT+1uuCSpNc2OBP40cFFmvlK9ZqAYzDTgusx8JTN/S+Nm8BQkSV2n2XsAbwEOi4h7IuJ/I+Jd1fph1QOWJNWv2akgRgLjgIOBd9EoDLP7pnxARMykenx0hz6nC5Kkza3ZM4B+YE416dsvgTVAH44ElqSe0WwA3AAcCRARbwG2BpbRGAl8XERsExGTaUwL/ct2NFSS1F7NjgSeDcyuHg19FZheDQJbHBHfAx4EVgOn+gSQJHWnVkYCHz/E6/8N+LdWGiVJ6jxHAktSoQwASSpU01NBVNs+GxEZEX3V7xaFl6Qe0WxReCJiN+B9wO8HrbYovCT1iGanggD4MnA2jQLxAywKL0k9oql7ABExDViamfevs2nYU0FYE1iS6rXJU0FExBjg8zQu/zQtM68ArgDYZY+9cyMvlyS1WTNzAb0ZmAzcHxHQmO5hQURMYROmgpAk1WuTLwFl5sLMfGNmTsrMSTQu8xyQmU9hUXhJ6hnDeQz0WuAu4K0R0R8RJ2/g5TcCj9GoA/BN4B/b0kpJUtu1MhXEwPZJg5YtCi9JPcKRwJJUqKZGAkfEJVVB+Aci4ocRseOgbRaFl6Qe0OxI4JuBd2TmPsCvgXPhdUXhpwL/GREj2tZaSVLbNDUSODN/mpmrq1/vpvG4J1gUXpJ6RjvuAZwE/Lhatii8JPWIlgIgIs6jUfnrmibe61QQklSjpgMgIk4EjgE+UT3+CRaFl6Se0exkcFNpzAT6wcx8edAmi8JLUo9otij8ucA2wM3VfEB3Z+Y/ZKZF4SWpRzQ7EvjbG3i9ReElqQc4EliSCtXsSOBxEXFzRDxS/RxbrbcmsCT1iGZHAp8D3JqZewK3Vr+DNYElqWc0WxN4Gn
BVtXwV8KFB660JLEk9oNl7ADsPKvTyFLBztexIYEnqES3fBK4GgW1yTV9HAktSvZoNgKcHLu1UP5+p1jsSWJJ6RLMBMBeYXi1PB340aL01gSWpBzQ7Evgi4HtVfeDfAR+rXn4jcDSNaaBfBmZ0oM2SpDZopSbwe9fzWmsCS1KPcCSwJBXKAJCkQrVaEOasiFgcEYsi4tqIGB0RkyPinmo6iP+OiK3b1VhJUvu0UhBmV+B04KDMfAcwgkZB+C8BX87MPYDngJPb0VBJUnu1egloJPAXETESGAM8CRwFfL/aPniaCElSF2k6ADJzKXAp8HsaX/wvAPOB5zNzdfWyIaeCcCSwJNWrlUtAY2lM/jYZmABsy+tnDR2SI4ElqV6tXAL6a+C3mflsZq4C5gCH0pgBdGB8wZBTQUiS6tVKAPweODgixkSjMPB7adQCvh04tnrN4GkiJEldpJV7APfQuNm7AFhYfdYVwOeAf4qIR4HxbKB+sCSpPhudCmJDMvMLNOYGGuwxYEornytJ6jxHAktSoVodCbxjRHw/Ih6KiCUR8VdDFYyXJHWXVs8AvgLclJlvA/YFljB0wXhJUhdpZRzAG4D3UN3kzcxXM/N5hi4YL0nqIq2cAUwGngWujIh7I+JbEbEtQxeMlyR1kVYCYCRwAPD1zNwf+D/WudyzoYLxTgUhSfVqJQD6gf5qPAA0xgQcwNAF4/+MU0FIUr1aGQj2FPCHiHhrtWpgJPBQBeMlSV2kpYFgwGnANVXRl8doFIHfivUXjJckdZFWRwLfBxy0nk2vKxgvSeoujgSWpEK1HAARMaJ6DPR/qt+tCSxJPaAdZwBn0BgBPMCawJLUA1qdC2gi8LfAt6rfA2sCS1JPaPUMYBZwNrCm+n08w6wJLEmqVytzAR0DPJOZ85t8vyOBJalGrTwGeijwwYg4GhgN7EBjdtAdI2JkdRYwZE3gzLyCRgUxdtlj7/VOFyFJ6pxWRgKfm5kTM3MScBxwW2Z+AmsCS1JP6MQ4AGsCS1IPaHUqCAAy8w7gjmrZmsCS1AMcCSxJhTIAJKlQrTwGultE3B4RD0bE4og4o1pvUXhJ6gGtnAGsBj6bmXsBBwOnRsReWBReknpCK4+BPpmZC6rll2jMB7QrFoWXpJ7QlqeAImISsD9wD8MsCh8RM4GZADv07dKOZrRs7JhRnDtnYe1tOHvq22ptg6QytBwAEbEd8APgzMx8sTEfXENmZkSsd5RvN44E7oYv3roDSFI5Wp0NdBSNL/9rMnNOtXpYReElSfVq5SmgoDHKd0lmXjZok0XhJakHtDoZ3AnAwoi4r1r3eeAiLAovSV2v6QDIzDuBGGKzReElqcu15SkgtU83PIk00I5uuCkuqXM6FgARMZVGfYARwLcy86JO/a0tSbd86XZDCEnqrI4EQESMAL4G/A2NspC/ioi5mflgJ/6etkwX3/QQz728qu5meDakLVanzgCmAI9WU0MTEdfRGCFsAPSIbrgUNXbMKC788DtrbQM0gqgb+sIQUrt1KgB2Bf4w6Pd+4C879LfUAX7ZrNUNfdENIaQ/tyWEcm03gQdPBQG8EhGL6mpLl+kDltXdiC5hX6xlX6zVNX3xubobAG9t5c2dCoClwG6Dfn9dcfjBU0FExLzMPKhDbekp9sVa9sVa9sVa9sVaETGvlfd3qiDMr4A9I2JyRGxNo2j83A79LUlSEzpyBpCZqyPiM8BPaDwGOjszF3fib0mSmtOxewCZeSNw4zBffkWn2tGD7Iu17Iu17Iu17Iu1WuqLyOyKmZglSZuZReElqVC1B0BETI2IhyPi0Ygorn5wRDweEQsj4r6BO/oRMS4ibo6IR6qfY+tuZydExOyIeGbwI8BD7Xs0fLU6Th6IiAPqa3n7DdEX50fE0urYuC8ijh607dyqLx6OiPfX0+r2i4jdIuL2iHgwIhZHxBnV+uKOiw30RfuOi8ys7T8aN4h/A+wObA3cD+xVZ5tq6IPHgb511l0MnF
MtnwN8qe52dmjf3wMcACza2L4DRwM/pjED7cHAPXW3fzP0xfnAP6/ntXtV/1a2ASZX/4ZG1L0PbeqHXYADquXtgV9X+1vccbGBvmjbcVH3GcBrU0Zk5qvAwJQRpZsGXFUtXwV8qMa2dExm/gxYsc7qofZ9GnB1NtwN7DhQeW5LMERfDGUacF1mvpKZvwUepfFvqedl5pOZuaBafglYQmNmgeKOiw30xVA2+bioOwDWN2XEhnZwS5TATyNifjU6GmDnzHyyWn4K2LmeptViqH0v9Vj5THVpY/agS4FF9EVETAL2B+6h8ONinb6ANh0XdQeA4N2ZeQDwAeDUiHjP4I3ZOLcr8lGtkve98nXgzcB+wJPAv9fbnM0nIrajUW/8zMx8cfC20o6L9fRF246LugNgo1NGbOkyc2n18xnghzRO2Z4eOI2tfj5TXws3u6H2vbhjJTOfzsw/ZeYa4JusPZ3fovsiIkbR+MK7JjPnVKuLPC7W1xftPC7qDoCip4yIiG0jYvuBZeB9wCIafTC9etl04Ef1tLAWQ+37XOCT1VMfBwMvDLoksEVa51r239E4NqDRF8dFxDYRMRnYE/jl5m5fJ0REAN8GlmTmZYM2FXdcDNUXbT0uuuBO99E07m7/Bjiv7vZs5n3fncZd+/uBxQP7D4wHbgUeAW4BxtXd1g7t/7U0TmFX0bheefJQ+07jKY+vVcfJQuCgutu/Gfriv6p9faD6x73LoNefV/XFw8AH6m5/G/vh3TQu7zwA3Ff9d3SJx8UG+qJtx4UjgSWpUHVfApIk1cQAkKRCGQCSVCgDQJIKZQBIUqEMAEkqlAEgSYUyACSpUP8P3tbndTO1rKEAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# TODO: Count how many words appear once, twice, three times, ... in qlist, then plot it.\n",
"# x-axis: how many times a word occurs (1, 2, 3, ...); y-axis: number of words with that frequency.\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"counts = dict()  # maps occurrence count -> number of words with that count\n",
"for w, c in words_qlist.items():\n",
"    if c in counts:\n",
"        counts[c] += 1\n",
"    else:\n",
"        counts[c] = 1\n",
"fig, ax = plt.subplots()\n",
"ax.hist(list(counts.values()), bins=np.arange(0, 250, 25), histtype='step', alpha=0.6, label=\"counts\")\n",
"ax.legend()\n",
"ax.set_xlim(0, 250)\n",
"ax.set_yticks(np.arange(0, 220, 20));  # trailing ';' suppresses the tick-object repr\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: What can you observe from the plot above? Its shape closely resembles a very famous function. Can you name the law?\n",
"# hint: [XXX]'s law\n",
"# \n",
"# "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"import nltk \n",
"from nltk.corpus import stopwords"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"#### 1.3 Text preprocessing\n",
"This part handles text-level processing. Some methods you can use:\n",
"\n",
"- 1. Stop-word filtering (search the web for \"english stop words list\" to find many pages with stop-word lists, or use the one bundled with NLTK) \n",
"- 2. Lower-casing: a basic operation \n",
"- 3. Removing useless symbols: e.g. runs of exclamation marks !!!, or other odd tokens.\n",
"- 4. Removing very low-frequency words: e.g. words occurring fewer than 10, 20, ... times (think about how to choose the threshold)\n",
"- 5. Handling digits: after tokenization some tokens may be pure numbers such as 44 or 415; treat all of them as a single token, which we can define as \"#number\"\n",
"- 6. Lemmatization: do not use stemming here, because the result of stemming may not be a valid word.\n"
]
},
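{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal illustration of point 6 above, using NLTK's WordNet lemmatizer (assumes the 'wordnet' data has been fetched with ```nltk.download('wordnet')```); unlike stemming, the result is a valid word:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk.stem import WordNetLemmatizer\n",
"\n",
"# Illustrative only: lemmatization keeps valid words, unlike stemming\n",
"lemmatizer = WordNetLemmatizer()\n",
"print(lemmatizer.lemmatize('cities'))  # a valid word ('city'), not a stem like 'citi'"
]
},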
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[nltk_data] Downloading package stopwords to\n",
"[nltk_data] /Users/fanhang/nltk_data...\n",
"[nltk_data] Package stopwords is already up-to-date!\n"
]
}
],
"source": [
"# TODO: Do the text processing. From the common methods above, pick suitable ones and preprocess qlist (not necessarily in the order listed, and not necessarily all of them)\n",
"import nltk \n",
"nltk.download('stopwords')\n",
"from nltk.corpus import stopwords\n",
"import codecs\n",
"import re\n",
"\n",
"def tokenizer(ori_list):\n",
" SYMBOLS = re.compile('[\\s;\\\"\\\",.!?\\\\/\\[\\]\\{\\}\\(\\)-]+')\n",
" new_list = []\n",
" for q in ori_list:\n",
" words = SYMBOLS.split(q.lower().strip())\n",
" new_list.append(' '.join(words))\n",
" return new_list\n",
"\n",
"def removeStopWord(ori_list):\n",
" new_list = []\n",
" restored = ['what','when','which','how','who','where']\n",
" english_stop_words = list(set(stopwords.words('english'))) \n",
" for w in restored:\n",
" english_stop_words.remove(w)\n",
" for q in ori_list:\n",
" sentence = ' '.join([w for w in q.strip().split(' ') if w not in english_stop_words])\n",
" new_list.append(sentence)\n",
" return new_list\n",
"\n",
"def removeLowFrequence(ori_list,vocabulary,thres=10):\n",
" new_list = []\n",
" for q in ori_list:\n",
" sentence = ' '.join([w for w in q.strip().split(' ') if vocabulary[w]>=thres])\n",
" new_list.append(sentence)\n",
" return new_list\n",
" \n",
"def replaceDigits(ori_list,replace = '#number'):\n",
" DIGITS = re.compile('\\d+')\n",
" new_list = []\n",
" for q in ori_list:\n",
" q = DIGITS.sub(replace,q)\n",
" new_list.append(q)\n",
" return new_list\n",
"\n",
"def createVocab(ori_list):\n",
" count = 0\n",
" vocab_count = dict()\n",
" for q in ori_list:\n",
" words = q.strip().split(' ')\n",
" count +=len(words)\n",
" for w in words:\n",
" if w in vocab_count:\n",
" vocab_count[w] +=1\n",
" else:\n",
" vocab_count[w] = 1\n",
" return vocab_count, count\n",
" \n",
"def writeFile(oriList, filename):\n",
" with codecs.open(filename,'w','utf8') as Fout:\n",
" for q in oriList:\n",
" Fout.write(q+u'\\n')\n",
" \n",
"def writeVocab(vocabulary, filename):\n",
" sortedList = sorted(vocabulary.items(),key = lambda d:d[1])\n",
" with codecs.open(filename, 'w', 'utf8') as Fout:\n",
" for (w,c) in sortedList:\n",
" Fout.write(w+u':'+str(c)+u'\\n')\n",
" \n",
"new_list = tokenizer(qlist)\n",
"new_list = removeStopWord(new_list)\n",
"new_list = replaceDigits(new_list)\n",
"vocabulary, count = createVocab(new_list)\n",
"new_list = removeLowFrequence(new_list,vocabulary,5)\n",
"vocab_count,count = createVocab(new_list)\n",
"writeVocab(vocab_count,\"train.vocab\")\n",
"qlist = new_list  # the updated question list"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'when': 6670,\n",
" 'beyonce': 243,\n",
" 'start': 434,\n",
" 'becoming': 68,\n",
" 'popular': 335,\n",
" 'what': 50267,\n",
" 'areas': 317,\n",
" 'compete': 40,\n",
" 'growing': 54,\n",
" 'leave': 123,\n",
" \"destiny's\": 35,\n",
" 'child': 221,\n",
" 'become': 713,\n",
" 'solo': 25,\n",
" 'singer': 54,\n",
" 'city': 1722,\n",
" 'state': 948,\n",
" 'grow': 74,\n",
" 'which': 6395,\n",
" 'decade': 154,\n",
" 'famous': 267,\n",
" 'r&b': 18,\n",
" 'group': 968,\n",
" 'lead': 203,\n",
" 'album': 240,\n",
" 'made': 858,\n",
" 'worldwide': 82,\n",
" 'known': 951,\n",
" 'artist': 129,\n",
" 'who': 9100,\n",
" 'managed': 26,\n",
" 'beyoncé': 170,\n",
" 'rise': 117,\n",
" 'fame': 30,\n",
" 'role': 206,\n",
" 'first': 2805,\n",
" 'released': 367,\n",
" 'release': 206,\n",
" 'dangerously': 6,\n",
" 'love': 52,\n",
" 'how': 9450,\n",
" 'many': 5547,\n",
" 'grammy': 28,\n",
" 'awards': 73,\n",
" 'win': 227,\n",
" \"beyoncé's\": 43,\n",
" 'name': 2783,\n",
" 'second': 510,\n",
" 'entertainment': 49,\n",
" 'venture': 10,\n",
" 'explore': 21,\n",
" 'marry': 56,\n",
" 'set': 226,\n",
" 'record': 216,\n",
" 'grammys': 6,\n",
" 'movie': 180,\n",
" 'receive': 241,\n",
" 'golden': 52,\n",
" 'globe': 21,\n",
" 'nomination': 9,\n",
" 'take': 666,\n",
" 'hiatus': 11,\n",
" 'career': 62,\n",
" 'control': 334,\n",
" 'management': 56,\n",
" 'darker': 16,\n",
" 'tone': 12,\n",
" 'previous': 87,\n",
" 'work': 634,\n",
" 'james': 102,\n",
" 'create': 228,\n",
" 'sasha': 10,\n",
" 'fierce': 9,\n",
" 'end': 520,\n",
" 'act': 381,\n",
" 'acting': 13,\n",
" 'job': 77,\n",
" '#number': 9749,\n",
" 'married': 60,\n",
" 'alter': 21,\n",
" 'ego': 5,\n",
" 'music': 662,\n",
" 'elements': 97,\n",
" 'time': 1108,\n",
" 'magazine': 120,\n",
" 'named': 274,\n",
" 'one': 1358,\n",
" 'people': 1597,\n",
" 'century': 934,\n",
" 'declared': 98,\n",
" 'dominant': 70,\n",
" 'woman': 66,\n",
" 'musician': 24,\n",
" 'recording': 71,\n",
" 'industry': 214,\n",
" 'association': 80,\n",
" 'america': 241,\n",
" 'recognize': 53,\n",
" 'top': 208,\n",
" 'certified': 8,\n",
" 'rated': 38,\n",
" 'powerful': 32,\n",
" 'female': 118,\n",
" 'describe': 226,\n",
" 'feminist': 12,\n",
" 'years': 585,\n",
" 'rate': 269,\n",
" 'influential': 53,\n",
" 'world': 863,\n",
" 'records': 174,\n",
" 'sold': 131,\n",
" 'year': 3407,\n",
" 'sell': 123,\n",
" 'part': 927,\n",
" 'leaving': 47,\n",
" \"beyonce's\": 62,\n",
" 'younger': 19,\n",
" 'also': 458,\n",
" 'sang': 16,\n",
" 'band': 116,\n",
" 'where': 3884,\n",
" 'get': 416,\n",
" 'race': 185,\n",
" 'father': 150,\n",
" 'childhood': 27,\n",
" 'home': 299,\n",
" 'believed': 189,\n",
" 'religion': 323,\n",
" 'worked': 82,\n",
" 'sales': 94,\n",
" 'manager': 55,\n",
" 'company': 631,\n",
" 'mother': 113,\n",
" 'sister': 45,\n",
" 'appeared': 55,\n",
" 'leader': 279,\n",
" 'descendant': 5,\n",
" 'raised': 57,\n",
" 'town': 146,\n",
" 'go': 259,\n",
" 'school': 626,\n",
" 'person': 262,\n",
" 'notice': 19,\n",
" 'singing': 23,\n",
" 'ability': 76,\n",
" 'moved': 104,\n",
" 'left': 144,\n",
" 'elementary': 12,\n",
" 'teachers': 19,\n",
" 'discovered': 181,\n",
" 'musical': 81,\n",
" 'talent': 7,\n",
" 'church': 517,\n",
" 'member': 148,\n",
" 'choir': 8,\n",
" 'type': 2133,\n",
" 'song': 278,\n",
" 'sing': 34,\n",
" 'competition': 115,\n",
" 'age': 327,\n",
" 'located': 1017,\n",
" 'dance': 68,\n",
" 'old': 326,\n",
" 'show': 309,\n",
" 'two': 1342,\n",
" 'decided': 76,\n",
" 'place': 586,\n",
" 'star': 75,\n",
" 'search': 27,\n",
" 'manage': 20,\n",
" 'girls': 31,\n",
" 'label': 73,\n",
" 'give': 226,\n",
" 'deal': 113,\n",
" 'brought': 162,\n",
" 'california': 35,\n",
" 'enter': 79,\n",
" 'quit': 9,\n",
" 'large': 423,\n",
" 'recorded': 113,\n",
" \"group's\": 13,\n",
" 'signed': 138,\n",
" 'later': 176,\n",
" 'cut': 48,\n",
" 'meet': 95,\n",
" 'met': 50,\n",
" 'placed': 73,\n",
" 'begin': 685,\n",
" 'girl': 11,\n",
" 'october': 68,\n",
" 'film': 312,\n",
" 'featured': 116,\n",
" \"child's\": 12,\n",
" 'major': 492,\n",
" 'single': 218,\n",
" 'award': 147,\n",
" 'best': 206,\n",
" 'performance': 75,\n",
" 'man': 125,\n",
" 'changed': 95,\n",
" 'based': 447,\n",
" 'quote': 12,\n",
" 'book': 407,\n",
" 'bible': 67,\n",
" 'debut': 47,\n",
" 'killing': 31,\n",
" 'sound': 128,\n",
" 'track': 34,\n",
" 'annual': 94,\n",
" 'included': 174,\n",
" \"film's\": 8,\n",
" 'soundtrack': 18,\n",
" 'hit': 107,\n",
" 'duet': 6,\n",
" 'mental': 21,\n",
" 'health': 138,\n",
" 'issue': 155,\n",
" 'event': 371,\n",
" 'occured': 16,\n",
" 'publicly': 17,\n",
" 'criticized': 70,\n",
" 'supported': 125,\n",
" 'depression': 31,\n",
" 'caused': 403,\n",
" 'long': 806,\n",
" 'helped': 200,\n",
" 'fight': 87,\n",
" 'replaced': 173,\n",
" 'blamed': 17,\n",
" 'overcome': 15,\n",
" 'following': 121,\n",
" 'split': 67,\n",
" 'newest': 12,\n",
" 'removed': 92,\n",
" 'members': 339,\n",
" 'weeks': 27,\n",
" 'independent': 101,\n",
" 'women': 193,\n",
" 'stay': 66,\n",
" 'network': 162,\n",
" 'land': 296,\n",
" 'third': 210,\n",
" 'survivor': 6,\n",
" 'week': 71,\n",
" 'french': 449,\n",
" 'composer': 20,\n",
" 'wrote': 345,\n",
" 'original': 273,\n",
" 'opera': 53,\n",
" '#numberth': 877,\n",
" 'lawsuit': 26,\n",
" 'filed': 33,\n",
" 'announce': 89,\n",
" 'austin': 7,\n",
" 'powers': 111,\n",
" 'three': 402,\n",
" 'countries': 497,\n",
" 'achieve': 65,\n",
" 'ten': 56,\n",
" 'status': 132,\n",
" 'starred': 34,\n",
" 'cuba': 11,\n",
" 'jr': 26,\n",
" 'fighting': 67,\n",
" 'better': 97,\n",
" 'charts': 27,\n",
" 'appear': 160,\n",
" 'mike': 6,\n",
" 'amount': 234,\n",
" 'gross': 31,\n",
" 'genre': 97,\n",
" 'critics': 47,\n",
" 'view': 172,\n",
" 'character': 113,\n",
" 'called': 1310,\n",
" 'comedy': 21,\n",
" 'along': 716,\n",
" 'highest': 266,\n",
" 'achieved': 42,\n",
" 'billboard': 27,\n",
" 'hot': 67,\n",
" 'since': 302,\n",
" 'number': 399,\n",
" 'five': 95,\n",
" 'singles': 21,\n",
" 'came': 197,\n",
" 'u': 382,\n",
" 'spot': 21,\n",
" 'chart': 31,\n",
" 'closer': 18,\n",
" 'associated': 185,\n",
" 'premiere': 22,\n",
" 'earn': 51,\n",
" 'final': 162,\n",
" 'got': 47,\n",
" 'hollywood': 22,\n",
" 'walk': 16,\n",
" 'embark': 6,\n",
" 'tour': 86,\n",
" 'europe': 288,\n",
" 'announced': 76,\n",
" 'would': 839,\n",
" 'european': 375,\n",
" 'started': 177,\n",
" 'november': 57,\n",
" 'perform': 114,\n",
" 'february': 39,\n",
" 'studio': 42,\n",
" 'albums': 46,\n",
" 'produce': 182,\n",
" 'birthday': 10,\n",
" \"b'day\": 5,\n",
" 'celebrate': 32,\n",
" 'high': 450,\n",
" 'climb': 6,\n",
" 'copies': 51,\n",
" 'collaborated': 8,\n",
" 'k': 17,\n",
" 'listen': 6,\n",
" 'much': 1254,\n",
" 'money': 246,\n",
" 'make': 589,\n",
" 'millions': 20,\n",
" 'dollars': 31,\n",
" 'pink': 5,\n",
" 'wide': 39,\n",
" 'call': 268,\n",
" 'concert': 48,\n",
" 'steve': 13,\n",
" 'martin': 31,\n",
" 'pop': 62,\n",
" 'international': 303,\n",
" 'beautiful': 19,\n",
" 'songs': 89,\n",
" 'beat': 39,\n",
" 'video': 160,\n",
" 'grossed': 6,\n",
" 'reveal': 28,\n",
" 'marriage': 65,\n",
" 'mtv': 26,\n",
" 'prominent': 83,\n",
" 'felt': 38,\n",
" 'went': 65,\n",
" 'instead': 145,\n",
" 'taylor': 27,\n",
" 'swift': 8,\n",
" 'portrayed': 29,\n",
" 'cadillac': 5,\n",
" 'gave': 153,\n",
" 'entire': 51,\n",
" 'salary': 12,\n",
" 'organization': 312,\n",
" 'inaugural': 16,\n",
" 'ball': 34,\n",
" 'obsessed': 5,\n",
" 'scene': 44,\n",
" 'donate': 27,\n",
" 'played': 193,\n",
" 'portray': 11,\n",
" 'received': 127,\n",
" 'january': 94,\n",
" 'ali': 26,\n",
" 'nominated': 21,\n",
" '#numbernd': 61,\n",
" 'tied': 17,\n",
" 'nominations': 18,\n",
" 'telephone': 32,\n",
" 'sixth': 42,\n",
" 'else': 240,\n",
" 'tie': 16,\n",
" 'six': 59,\n",
" 'ceremony': 36,\n",
" 'artists': 90,\n",
" 'lady': 20,\n",
" 'hits': 20,\n",
" 'break': 64,\n",
" 'business': 183,\n",
" 'ways': 75,\n",
" 'landmark': 39,\n",
" 'see': 179,\n",
" 'china': 324,\n",
" 'inspired': 67,\n",
" 'stop': 135,\n",
" 'using': 296,\n",
" 'last': 412,\n",
" 'suggested': 68,\n",
" 'reports': 31,\n",
" 'performing': 27,\n",
" 'gaddafi': 152,\n",
" 'surface': 68,\n",
" 'earned': 22,\n",
" 'shows': 71,\n",
" 'became': 369,\n",
" 'stage': 80,\n",
" 'confirm': 14,\n",
" 'donations': 13,\n",
" 'listed': 84,\n",
" 'paid': 54,\n",
" 'performer': 5,\n",
" 'per': 218,\n",
" 'hoe': 6,\n",
" 'everyone': 24,\n",
" 'learn': 48,\n",
" 'performed': 96,\n",
" 'happen': 149,\n",
" 'tell': 40,\n",
" 'donation': 10,\n",
" 'privately': 7,\n",
" 'information': 160,\n",
" 'libyan': 26,\n",
" 'ruler': 88,\n",
" 'pay': 116,\n",
" 'private': 132,\n",
" 'headline': 10,\n",
" 'glastonbury': 8,\n",
" 'festival': 146,\n",
" 'fourth': 72,\n",
" 'debuted': 13,\n",
" 'success': 76,\n",
" 'activity': 97,\n",
" 'four': 197,\n",
" 'nights': 23,\n",
" 'forth': 19,\n",
" 'awarded': 73,\n",
" 'writing': 96,\n",
" 'ballroom': 11,\n",
" 'write': 167,\n",
" 'story': 81,\n",
" 'earlier': 49,\n",
" 'standing': 23,\n",
" 'room': 54,\n",
" 'concerts': 26,\n",
" 'birth': 81,\n",
" 'appearance': 40,\n",
" 'giving': 32,\n",
" 'atlantic': 134,\n",
" 'daughter': 31,\n",
" 'blue': 78,\n",
" 'born': 177,\n",
" 'public': 410,\n",
" 'play': 307,\n",
" 'resort': 14,\n",
" 'compilation': 13,\n",
" 'topic': 40,\n",
" 'documentary': 18,\n",
" 'sign': 107,\n",
" 'title': 280,\n",
" 'added': 149,\n",
" 'whose': 327,\n",
" 'inauguration': 8,\n",
" 'national': 458,\n",
" 'anthem': 8,\n",
" 'minute': 13,\n",
" 'half': 92,\n",
" 'new': 1904,\n",
" 'president': 466,\n",
" 'month': 184,\n",
" 'dates': 38,\n",
" 'mrs': 5,\n",
" 'carter': 5,\n",
" 'entail': 8,\n",
" 'successful': 102,\n",
" 'tours': 11,\n",
" 'yet': 22,\n",
" 'epic': 19,\n",
" 'voiced': 11,\n",
" 'animated': 5,\n",
" 'honorary': 9,\n",
" 'chair': 24,\n",
" 'voice': 34,\n",
" 'april': 68,\n",
" 'cover': 100,\n",
" 'may': 393,\n",
" 'huge': 24,\n",
" 'surprise': 6,\n",
" 'fifth': 39,\n",
" 'consecutive': 22,\n",
" 'joined': 69,\n",
" 'run': 183,\n",
" 'reported': 117,\n",
" 'e': 63,\n",
" 'earning': 8,\n",
" 'earnings': 15,\n",
" 'digital': 78,\n",
" 'days': 146,\n",
" 'husband': 13,\n",
" 'featuring': 12,\n",
" 'jay': 25,\n",
" 'z': 21,\n",
" 'pose': 7,\n",
" 'august': 64,\n",
" 'took': 226,\n",
" 'lost': 128,\n",
" 'next': 88,\n",
" 'model': 207,\n",
" 'british': 604,\n",
" 'entertainer': 5,\n",
" 'making': 94,\n",
" 'black': 239,\n",
" 'super': 48,\n",
" 'bowl': 34,\n",
" 'formation': 61,\n",
" 'online': 62,\n",
" 'service': 291,\n",
" 'day': 446,\n",
" 'streaming': 10,\n",
" 'kind': 809,\n",
" 'platform': 36,\n",
" 'exclusively': 15,\n",
" 'together': 120,\n",
" 'pregnant': 11,\n",
" 'described': 123,\n",
" 'thing': 67,\n",
" 'endure': 9,\n",
" 'relationship': 161,\n",
" 'creating': 57,\n",
" 'speculation': 6,\n",
" 'combined': 56,\n",
" 'life': 280,\n",
" 'attended': 67,\n",
" 'confirmed': 20,\n",
" 'watched': 30,\n",
" 'pregnancy': 13,\n",
" 'broadcast': 109,\n",
" 'history': 227,\n",
" 'even': 108,\n",
" 'guinness': 13,\n",
" 'term': 906,\n",
" 'aug': 5,\n",
" 'prior': 183,\n",
" 'google': 14,\n",
" 'website': 55,\n",
" 'talked': 10,\n",
" 'struggles': 6,\n",
" 'hospital': 74,\n",
" 'delivered': 19,\n",
" 'dedicated': 56,\n",
" 'b': 48,\n",
" 'c': 90,\n",
" 'stand': 282,\n",
" 'credited': 67,\n",
" 'rally': 20,\n",
" 'presidential': 100,\n",
" 'raise': 37,\n",
" 'obama': 23,\n",
" 'club': 193,\n",
" 'endorse': 14,\n",
" 'march': 104,\n",
" 'attend': 103,\n",
" 'july': 82,\n",
" 'social': 211,\n",
" 'media': 149,\n",
" 'upload': 6,\n",
" 'picture': 26,\n",
" 'paper': 146,\n",
" 'ballot': 8,\n",
" 'interview': 9,\n",
" 'asked': 42,\n",
" 'campaign': 89,\n",
" 'encourages': 9,\n",
" 'leadership': 40,\n",
" 'quoted': 13,\n",
" 'saying': 23,\n",
" 'modern': 390,\n",
" 'say': 286,\n",
" 'contribute': 44,\n",
" 'response': 85,\n",
" 'speech': 74,\n",
" 'ban': 57,\n",
" 'encourage': 30,\n",
" 'used': 2106,\n",
" 'words': 113,\n",
" 'nigerian': 40,\n",
" 'author': 104,\n",
" 'females': 35,\n",
" 'letter': 118,\n",
" 'important': 349,\n",
" 'un': 81,\n",
" 'summit': 18,\n",
" 'focused': 50,\n",
" 'developing': 47,\n",
" 'funding': 53,\n",
" 'addressed': 20,\n",
" 'serving': 35,\n",
" 'relation': 77,\n",
" 'want': 216,\n",
" 'recipients': 6,\n",
" 'focus': 167,\n",
" 'family': 208,\n",
" 'death': 347,\n",
" 'lots': 9,\n",
" 'bail': 5,\n",
" 'prison': 19,\n",
" \"who's\": 64,\n",
" 'protest': 54,\n",
" 'spend': 82,\n",
" 'june': 100,\n",
" 'total': 177,\n",
" 'worth': 56,\n",
" 'celebrity': 11,\n",
" 'list': 102,\n",
" 'couple': 13,\n",
" 'net': 27,\n",
" 'began': 203,\n",
" 'reporting': 16,\n",
" 'starting': 65,\n",
" 'ever': 75,\n",
" 'predicted': 22,\n",
" 'billion': 32,\n",
" 'range': 174,\n",
" 'octaves': 5,\n",
" 'distinctive': 16,\n",
" 'critic': 22,\n",
" 'era': 291,\n",
" 'influenced': 173,\n",
" 'style': 286,\n",
" 'rosen': 6,\n",
" 'daily': 82,\n",
" 'mail': 20,\n",
" 'claim': 154,\n",
" 'span': 42,\n",
" 'york': 394,\n",
" \"times'\": 8,\n",
" 'jon': 5,\n",
" 'calls': 26,\n",
" 'vocal': 17,\n",
" 'generally': 138,\n",
" 'categorized': 9,\n",
" 'besides': 265,\n",
" 'genres': 16,\n",
" 'mostly': 87,\n",
" 'releases': 24,\n",
" 'english': 473,\n",
" 'language': 836,\n",
" 'spanish': 185,\n",
" 'american': 844,\n",
" 'mainly': 66,\n",
" 'sung': 18,\n",
" 'usually': 288,\n",
" 'several': 111,\n",
" 'recordings': 67,\n",
" 'come': 375,\n",
" 'aspect': 90,\n",
" 'example': 317,\n",
" 'aimed': 18,\n",
" 'towards': 89,\n",
" 'male': 62,\n",
" 'audience': 37,\n",
" 'theme': 52,\n",
" 'early': 494,\n",
" 'themes': 11,\n",
" 'credits': 15,\n",
" 'production': 177,\n",
" 'addition': 177,\n",
" 'co': 66,\n",
" 'rather': 72,\n",
" 'things': 131,\n",
" 'producers': 15,\n",
" 'songwriter': 7,\n",
" 'african': 181,\n",
" 'credit': 59,\n",
" 'influence': 177,\n",
" 'michael': 31,\n",
" 'jackson': 16,\n",
" 'kid': 7,\n",
" 'tribute': 20,\n",
" 'cites': 5,\n",
" 'mariah': 7,\n",
" 'carey': 8,\n",
" 'biggest': 112,\n",
" 'feel': 104,\n",
" 'around': 270,\n",
" 'inspiration': 32,\n",
" 'practice': 114,\n",
" 'runs': 75,\n",
" 'honor': 42,\n",
" 'entertaining': 8,\n",
" 'motivated': 13,\n",
" 'wearing': 26,\n",
" 'baker': 11,\n",
" 'noted': 49,\n",
" 'madonna': 160,\n",
" 'said': 292,\n",
" 'definition': 123,\n",
" 'strong': 77,\n",
" 'influences': 35,\n",
" 'jean': 13,\n",
" 'michel': 5,\n",
" 'raw': 8,\n",
" 'singers': 14,\n",
" 'background': 22,\n",
" 'musicians': 26,\n",
" 'introduce': 60,\n",
" 'share': 131,\n",
" 'supports': 19,\n",
" 'characteristics': 55,\n",
" 'acclaim': 5,\n",
" 'former': 173,\n",
" 'def': 7,\n",
" 'greatest': 73,\n",
" 'alive': 18,\n",
" 'praise': 7,\n",
" 'chose': 14,\n",
" 'dancers': 6,\n",
" 'l': 16,\n",
" 'alice': 5,\n",
" 'jones': 27,\n",
" 'self': 119,\n",
" 'proclaimed': 16,\n",
" 'according': 467,\n",
" 'away': 126,\n",
" 'back': 244,\n",
" 'created': 558,\n",
" 'longer': 88,\n",
" 'needed': 96,\n",
" 'sex': 49,\n",
" 'appeal': 25,\n",
" 'characterized': 37,\n",
" 'journalist': 22,\n",
" 'symbol': 42,\n",
" 'word': 447,\n",
" 'oxford': 21,\n",
" 'dictionary': 20,\n",
" '#numbers': 442,\n",
" 'often': 490,\n",
" 'physical': 79,\n",
" 'shape': 68,\n",
" 'slang': 5,\n",
" 'put': 127,\n",
" 'likes': 5,\n",
" 'dress': 24,\n",
" 'september': 70,\n",
" 'area': 739,\n",
" 'exploring': 8,\n",
" \"world's\": 101,\n",
" 'feature': 224,\n",
" 'tv': 93,\n",
" 'hottest': 14,\n",
" 'tom': 31,\n",
" 'ford': 17,\n",
" 'complex': 61,\n",
" 'museum': 173,\n",
" 'models': 43,\n",
" 'wax': 5,\n",
" 'parent': 27,\n",
" 'help': 284,\n",
" 'posed': 5,\n",
" 'si': 9,\n",
" \"mother's\": 24,\n",
" 'sports': 117,\n",
" 'illustrated': 6,\n",
" 'dressed': 6,\n",
" 'fan': 23,\n",
" 'base': 121,\n",
" 'referred': 176,\n",
" 'fans': 44,\n",
" 'know': 97,\n",
" 'latest': 31,\n",
" 'given': 354,\n",
" 'derive': 32,\n",
" 'clothing': 90,\n",
" 'line': 242,\n",
" 'seen': 158,\n",
" 'controversy': 46,\n",
" 'spark': 7,\n",
" 'criticize': 22,\n",
" 'tribal': 17,\n",
" 'makeup': 7,\n",
" 'drew': 17,\n",
" 'criticism': 49,\n",
" 'racial': 68,\n",
" 'community': 172,\n",
" 'professor': 48,\n",
" 'northeastern': 7,\n",
" 'university': 616,\n",
" 'criticisms': 5,\n",
" 'accused': 93,\n",
" 'coloring': 6,\n",
" 'hair': 28,\n",
" 'vogue': 6,\n",
" 'request': 30,\n",
" 'respond': 45,\n",
" 'accusations': 10,\n",
" 'changing': 40,\n",
" 'pictures': 32,\n",
" 'light': 265,\n",
" 'skin': 27,\n",
" 'color': 271,\n",
" 'believes': 47,\n",
" 'involve': 23,\n",
" 'well': 178,\n",
" 'supposedly': 24,\n",
" 'use': 1234,\n",
" 'natural': 169,\n",
" 'images': 21,\n",
" 'bestowed': 5,\n",
" 'upon': 139,\n",
" 'whats': 11,\n",
" 'guardian': 13,\n",
" 'reigning': 11,\n",
" 'stated': 81,\n",
" '#numberst': 90,\n",
" 'publication': 86,\n",
" 'heir': 14,\n",
" 'apparent': 6,\n",
" 'united': 583,\n",
" 'states': 729,\n",
" 'rock': 132,\n",
" 'cited': 26,\n",
" 'friend': 20,\n",
" 'learned': 29,\n",
" 'country': 956,\n",
" 'brand': 22,\n",
" 'soda': 5,\n",
" 'seeing': 12,\n",
" 'involved': 155,\n",
" 'join': 143,\n",
" 'pepsi': 11,\n",
" 'global': 84,\n",
" 'studying': 49,\n",
" 'indie': 7,\n",
" 'white': 178,\n",
" 'studied': 73,\n",
" 'live': 361,\n",
" 'research': 207,\n",
" 'commercial': 95,\n",
" 'crazy': 8,\n",
" 'selling': 49,\n",
" 'organism': 56,\n",
" 'roll': 13,\n",
" 'hall': 94,\n",
" 'considers': 12,\n",
" 'us': 783,\n",
" 'legend': 22,\n",
" 'throughout': 89,\n",
" 'without': 165,\n",
" 'holds': 68,\n",
" 'wins': 28,\n",
" 'night': 77,\n",
" 'bring': 77,\n",
" 'actress': 18,\n",
" 'partnered': 14,\n",
" 'endorsement': 10,\n",
" 'percentage': 779,\n",
" 'positive': 64,\n",
" 'agree': 81,\n",
" 'million': 100,\n",
" 'sent': 144,\n",
" 'asking': 7,\n",
" 'soft': 44,\n",
" 'drink': 28,\n",
" 'change': 327,\n",
" 'mind': 30,\n",
" 'due': 239,\n",
" 'nature': 90,\n",
" 'product': 114,\n",
" 'advertisements': 7,\n",
" '#number%': 134,\n",
" 'true': 76,\n",
" 'gold': 68,\n",
" 'belongs': 9,\n",
" 'fragrance': 5,\n",
" 'limited': 72,\n",
" 'edition': 46,\n",
" 'heat': 56,\n",
" 'rush': 15,\n",
" 'editions': 13,\n",
" 'launched': 103,\n",
" 'promote': 55,\n",
" 'exist': 140,\n",
" 'young': 81,\n",
" 'acquired': 47,\n",
" 'deals': 18,\n",
" 'express': 44,\n",
" 'game': 318,\n",
" 'cancelled': 13,\n",
" 'brands': 7,\n",
" 'jobs': 56,\n",
" 'suit': 14,\n",
" 'settled': 42,\n",
" 'producing': 47,\n",
" 'backing': 13,\n",
" 'disagreement': 14,\n",
" 'court': 360,\n",
" 'agreement': 165,\n",
" 'athletic': 29,\n",
" 'ltd': 8,\n",
" 'products': 101,\n",
" 'stores': 69,\n",
" 'partner': 53,\n",
" 'london': 348,\n",
" 'launch': 74,\n",
" 'division': 123,\n",
" 'partnership': 30,\n",
" 'owner': 28,\n",
" 'tidal': 11,\n",
" 'ownership': 25,\n",
" 'services': 117,\n",
" 'system': 897,\n",
" 'owns': 51,\n",
" 'providing': 24,\n",
" 'low': 148,\n",
" 'royalty': 8,\n",
" 'amounts': 17,\n",
" 'house': 368,\n",
" 'relatives': 18,\n",
" \"family's\": 7,\n",
" 'types': 301,\n",
" 'purchase': 70,\n",
" 'items': 41,\n",
" 'displayed': 43,\n",
" 'shares': 33,\n",
" 'fashion': 35,\n",
" 'introduction': 31,\n",
" 'junior': 11,\n",
" 'collection': 65,\n",
" 'accessory': 11,\n",
" 'shopping': 27,\n",
" 'introduced': 199,\n",
" 'idea': 146,\n",
" 'brazil': 67,\n",
" 'add': 70,\n",
" 'shoes': 6,\n",
" 'fashions': 5,\n",
" 'mobile': 25,\n",
" 'team': 418,\n",
" \"macy's\": 5,\n",
" 'store': 137,\n",
" 'outdoor': 15,\n",
" 'full': 120,\n",
" 'contract': 61,\n",
" 'england': 262,\n",
" 'equal': 44,\n",
" 'disaster': 35,\n",
" 'foundation': 107,\n",
" 'cash': 7,\n",
" 'startup': 5,\n",
" 'hurricane': 25,\n",
" 'katrina': 7,\n",
" 'provide': 202,\n",
" 'support': 307,\n",
" 'found': 641,\n",
" 'initially': 92,\n",
" 'recent': 100,\n",
" 'beginning': 128,\n",
" 'participate': 44,\n",
" 'hope': 40,\n",
" 'haiti': 6,\n",
" 'benefit': 105,\n",
" 'opened': 101,\n",
" 'center': 320,\n",
" 'location': 160,\n",
" 'bin': 16,\n",
" 'image': 32,\n",
" 'enterprise': 15,\n",
" 'charity': 28,\n",
" 'benefited': 16,\n",
" 'god': 232,\n",
" 'usa': 34,\n",
" 'george': 131,\n",
" 'stars': 18,\n",
" 'earthquake': 79,\n",
" 'open': 222,\n",
" 'brooklyn': 14,\n",
" 'phoenix': 5,\n",
" 'lee': 73,\n",
" 'laden': 5,\n",
" 'killed': 160,\n",
" 'demand': 42,\n",
" 'plan': 161,\n",
" 'contributing': 14,\n",
" 'food': 192,\n",
" 'held': 412,\n",
" 'speaking': 83,\n",
" 'gift': 5,\n",
" 'finding': 21,\n",
" 'qualities': 17,\n",
" 'every': 144,\n",
" 'human': 317,\n",
" 'followed': 53,\n",
" 'tragic': 7,\n",
" 'others': 75,\n",
" 'gun': 44,\n",
" 'shooting': 18,\n",
" 'prompted': 32,\n",
" 'creation': 86,\n",
" 'humanitarian': 8,\n",
" \"frédéric's\": 30,\n",
" 'nationalities': 8,\n",
" 'frédéric': 84,\n",
" 'active': 90,\n",
" 'instrument': 85,\n",
" 'primarily': 101,\n",
" 'depart': 15,\n",
" 'poland': 64,\n",
" 'chopin': 328,\n",
" 'compose': 20,\n",
" 'die': 263,\n",
" \"chopin's\": 158,\n",
" 'majority': 207,\n",
" ...}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vocab_count"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第二部分: 文本的表示\n",
"当我们做完必要的文本处理之后就需要想办法表示文本了,这里有几种方式\n",
"\n",
"- 1. 使用```tf-idf vector```\n",
"- 2. 使用embedding技术如```word2vec```, ```bert embedding```等\n",
"\n",
"下面我们分别提取这三个特征来做对比。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.1 使用tf-idf表示向量\n",
"把```qlist```中的每一个问题的字符串转换成```tf-idf```向量, 转换之后的结果存储在```X```矩阵里。 ``X``的大小是: ``N* D``的矩阵。 这里``N``是问题的个数(样本个数),\n",
"``D``是词典库的大小"
]
},
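The tf-idf recipe described above can be sketched on a toy corpus, independently of this notebook's own `computeTF`/`computeIDF` implementation. The three example documents below are illustrative assumptions, not the assignment data:

```python
import math
from collections import Counter

docs = ["what is ai", "what is ml", "ai and ml"]  # toy corpus (assumption)
N = len(docs)

vocab = sorted({w for d in docs for w in d.split()})
df = Counter(w for d in docs for w in set(d.split()))  # document frequency

def tfidf(doc):
    # tf = count / doc length; idf = log(N / df)
    counts = Counter(doc.split())
    n = len(doc.split())
    return [counts[w] / n * math.log(N / df[w]) for w in vocab]

X = [tfidf(d) for d in docs]  # N x D matrix, as in the section above
```

Words absent from a document get weight 0, and rarer words (higher idf) weigh more; the assignment's `X_tfidf` has the same N x D shape.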
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"10522\n",
"[0.0477289 0.00397813 0.00640568 ... 0. 0. 0. ]\n",
"(86821, 10522)\n"
]
}
],
"source": [
"# TODO \n",
"def computeTF(vocab,c):\n",
" TF = np.ones(len(vocab))\n",
" word2id = dict()\n",
" id2word = dict()\n",
" for word, fre in vocab.items():\n",
" TF[len(word2id)] = 1.0 * fre / c\n",
" id2word[len(word2id)] = word\n",
" word2id[word] = len(word2id)\n",
" return TF,word2id,id2word\n",
"\n",
"def computeIDF(word2id,qlist):\n",
" IDF = np.ones(len(word2id))\n",
" for q in qlist:\n",
" words = set(q.strip().split())\n",
" for w in words:\n",
" IDF[word2id[w]] +=1\n",
" \n",
" IDF /= len(qlist)\n",
" IDF = -1.0 * np.log2(IDF)\n",
" return IDF\n",
"\n",
"def computeSentenceEach(sentence,tfidf,word2id):\n",
" sentence_tfidf = np.zeros(len(word2id))\n",
" for w in sentence.strip().split(' '):\n",
" if w not in word2id:\n",
" continue\n",
" sentence_tfidf[word2id[w]] = tfidf[word2id[w]]\n",
" return sentence_tfidf\n",
"\n",
"def computeSentence(qlist,word2id,tfidf):\n",
" X_tfidf = np.zeros((len(qlist),len(word2id)))\n",
" for i,q in enumerate(qlist):\n",
" X_tfidf[i] = computeSentenceEach(q,tfidf,word2id)\n",
" return X_tfidf \n",
" \n",
"TF,word2id,id2word = computeTF(vocab_count,count)\n",
"print(len(word2id))\n",
"IDF = computeIDF(word2id,qlist)\n",
"vectorizer = np.multiply(TF,IDF)# 定义一个tf-idf的vectorizer\n",
"X_tfidf = computeSentence(qlist,word2id,vectorizer)# 结果存放在X矩阵里\n",
"print(X_tfidf[0])\n",
"print(X_tfidf.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 使用wordvec + average pooling\n",
"词向量方面需要下载: https://nlp.stanford.edu/projects/glove/ (请下载``glove.6B.zip``),并使用``d=200``的词向量(200维)。国外网址如果很慢,可以在百度上搜索国内服务器上的。 每个词向量获取完之后,即可以得到一个句子的向量。 我们通过``average pooling``来实现句子的向量。 "
]
},
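Average pooling itself is a one-liner once word vectors are available. A minimal sketch with a tiny 4-dimensional toy embedding table (an assumption standing in for `glove.6B.200d.txt`):

```python
import numpy as np

# toy 4-d "GloVe" table (assumption); the real vectors come from glove.6B.200d.txt
embedding = {
    "what": np.array([1.0, 0.0, 0.0, 0.0]),
    "is":   np.array([0.0, 1.0, 0.0, 0.0]),
    "ai":   np.array([0.0, 0.0, 1.0, 1.0]),
    "unknown": np.zeros(4),
}

def sentence_vector(sentence, embedding, dim=4):
    # average pooling: mean of the word vectors; OOV words map to 'unknown'
    words = sentence.strip().split()
    vecs = [embedding.get(w, embedding["unknown"]) for w in words]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

v = sentence_vector("what is ai", embedding)
```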
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"# TODO 基于Glove向量获取句子向量\n",
"from gensim.models import KeyedVectors\n",
"from gensim.scripts.glove2word2vec import glove2word2vec\n",
"\n",
"def loadEmbedding(filename):\n",
" word2vec_temp_file = 'word2vec_temp.txt'\n",
" glove2word2vec(filename,word2vec_temp_file)\n",
" model = KeyedVectors.load_word2vec_format(word2vec_temp_file)\n",
" return model\n",
"\n",
"def computeGloveSentenceEach(sentence,embedding):\n",
" emb = np.zeros(200)\n",
" words = sentence.strip().split(' ')\n",
" for w in words:\n",
" if w not in embedding:\n",
" w = 'unknown'\n",
" emb += embedding[w]\n",
" return emb / len(words)\n",
" \n",
"def computeGloveSentence(qlist,embedding):\n",
" X_w2v = np.zeros((len(qlist),200))\n",
" for i,q in enumerate(qlist):\n",
" X_w2v[i] = computeGloveSentenceEach(q,embedding)\n",
" return X_w2v\n",
"emb = loadEmbedding('glove.6B.200d.txt')# 这是 D*H的矩阵,这里的D是词典库的大小, H是词向量的大小。 这里面我们给定的每个单词的词向量,\n",
" # 这需要从文本中读取\n",
" \n",
"X_w2v = computeGloveSentence(qlist,emb)# 初始化完emb之后就可以对每一个句子来构建句子向量了,这个过程使用average pooling来实现\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.3 使用BERT + average pooling\n",
"最近流行的BERT也可以用来学出上下文相关的词向量(contex-aware embedding), 在很多问题上得到了比较好的结果。在这里,我们不做任何的训练,而是直接使用已经训练好的BERT embedding。 具体如何训练BERT将在之后章节里体会到。 为了获取BERT-embedding,可以直接下载已经训练好的模型从而获得每一个单词的向量。可以从这里获取: https://github.com/imgarylai/bert-embedding , 请使用```bert_12_768_12```\t当然,你也可以从其他source获取也没问题,只要是合理的词向量。 "
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading /Users/fanhang/.mxnet/models/bert_12_768_12_wiki_multilingual_cased-b0f57a20.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/bert_12_768_12_wiki_multilingual_cased-b0f57a20.zip...\n"
]
}
],
"source": [
"# TODO 基于BERT的句子向量计算\n",
"from bert_embedding import BertEmbedding\n",
"sentence_embedding = np.ones((len(qlist),768))\n",
"\n",
"X_bert = BertEmbedding(model='bert_12_768_12', dataset_name='wiki_multilingual_cased')# 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。 \n",
"all_embedding = bert_embedding(qlist,'sum')\n",
"for i in range(len(all_embedding)):\n",
" sentence_embedding[i] = np.sum(all_embedding[i][1],axis =0) / len(q.strip().split(' '))\n",
" if i == 0:\n",
" print(sentence_embedding[i])\n",
" \n",
"X_bert = sentence_embedding "
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### 第三部分: 相似度匹配以及搜索\n",
"在这部分里,我们需要把用户每一个输入跟知识库里的每一个问题做一个相似度计算,从而得出最相似的问题。但对于这个问题,时间复杂度其实很高,所以我们需要结合倒排表来获取相似度最高的问题,从而获得答案。"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"import queue as Q\n",
"que = Q.PriorityQueue()\n",
"def cosineSimilarity(vec1,vec2):\n",
" return np.dot(vec1,vec2.T)/(np.sqrt(np.sum(vec1**2))*np.sqrt(np.sum(vec2**2)))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.1 tf-idf + 余弦相似度\n",
"我们可以直接基于计算出来的``tf-idf``向量,计算用户最新问题与库中存储的问题之间的相似度,从而选择相似度最高的问题的答案。这个方法的复杂度为``O(N)``, ``N``是库中问题的个数。"
]
},
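The priority-queue hint in the cells below can be illustrated directly: keeping only the top-k scores while scanning all N questions costs O(N log k) instead of O(N log N) for a full sort. A minimal sketch on toy vectors (`heapq.nlargest` is the standard-library equivalent of the priority-queue approach):

```python
import heapq
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy stored question vectors (assumption)
q = np.array([1.0, 0.1])                            # toy query vector

# indices of the top-2 most similar stored questions, in O(N log k)
top2 = heapq.nlargest(2, range(len(X)), key=lambda i: cosine(X[i], q))
```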
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[3, 77739, 80413, 39743, 28215]\n",
"['Houston, Texas' 'Umma' 'Sicily' 'all' 'Melbourne']\n"
]
}
],
"source": [
"def get_top_results_tfidf_noindex(query):\n",
" # TODO 需要编写\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 对于用户的输入 query 首先做一系列的预处理(上面提到的方法),然后再转换成tf-idf向量(利用上面的vectorizer)\n",
" 2. 计算跟每个库里的问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" top = 5\n",
" query_tfidf = computeSentenceEach(query.lower(),vectorizer,word2id)\n",
" for i,vec in enumerate(X_tfidf):\n",
" result = cosineSimilarity(vec,query_tfidf)\n",
" que.put((-1 * result,i))\n",
" \n",
" i = 0\n",
" top_idxs = []\n",
" while(i<top and not que.empty()):\n",
" top_idxs.append(que.get()[1])\n",
" i += 1\n",
" print(top_idxs)\n",
" return np.array(alist)[top_idxs]\n",
" \n",
"results = get_top_results_tfidf_noindex('In what city and state did Beyonce grow up')\n",
"print(results)\n",
" \n",
"# top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下标 \n",
" # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
"# return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"print (get_top_results_tfidf_noindex(\"\"))\n",
"print (get_top_results_tfidf_noindex(\"\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"你会发现上述的程序很慢,没错! 是因为循环了所有库里的问题。为了优化这个过程,我们需要使用一种数据结构叫做```倒排表```。 使用倒排表我们可以把单词和出现这个单词的文档做关键。 之后假如要搜索包含某一个单词的文档,即可以非常快速的找出这些文档。 在这个QA系统上,我们首先使用倒排表来快速查找包含至少一个单词的文档,然后再进行余弦相似度的计算,即可以大大减少```时间复杂度```。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.2 倒排表的创建\n",
"倒排表的创建其实很简单,最简单的方法就是循环所有的单词一遍,然后记录每一个单词所出现的文档,然后把这些文档的ID保存成list即可。我们可以定义一个类似于```hash_map```, 比如 ``inverted_index = {}``, 然后存放包含每一个关键词的文档出现在了什么位置,也就是,通过关键词的搜索首先来判断包含这些关键词的文档(比如出现至少一个),然后对于candidates问题做相似度比较。"
]
},
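The construction described above fits in a few lines. A self-contained sketch on a toy question list (an assumption, not the assignment's `qlist`):

```python
# toy question list (assumption)
qlist_toy = ["what is ai", "what is ml", "ai applications"]

inverted = {}
for i, q in enumerate(qlist_toy):
    for w in set(q.split()):          # set(): record each doc at most once per word
        inverted.setdefault(w, set()).add(i)

# candidate documents for a query: union over its words; only these get scored
query = "ai ml"
candidates = set()
for w in query.split():
    candidates |= inverted.get(w, set())
```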
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"# TODO 请创建倒排表\n",
"word_doc = dict()\n",
"for i,q in enumerate(qlist):\n",
" words = q.strip().split(' ')\n",
" for w in set(words):\n",
" if w not in word_doc:\n",
" word_doc[w] = set([])\n",
" word_doc[w] = word_doc[w] | set([i])\n",
"inverted_idx = word_doc # 定一个一个简单的倒排表,是一个map结构。 循环所有qlist一遍就可以"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.3 语义相似度\n",
"这里有一个问题还需要解决,就是语义的相似度。可以这么理解: 两个单词比如car, auto这两个单词长得不一样,但从语义上还是类似的。如果只是使用倒排表我们不能考虑到这些单词之间的相似度,这就导致如果我们搜索句子里包含了``car``, 则我们没法获取到包含auto的所有的文档。所以我们希望把这些信息也存下来。那这个问题如何解决呢? 其实也不难,可以提前构建好相似度的关系,比如对于``car``这个单词,一开始就找好跟它意思上比较类似的单词比如top 10,这些都标记为``related words``。所以最后我们就可以创建一个保存``related words``的一个``map``. 比如调用``related_words['car']``即可以调取出跟``car``意思上相近的TOP 10的单词。 \n",
"\n",
"那这个``related_words``又如何构建呢? 在这里我们仍然使用``Glove``向量,然后计算一下俩俩的相似度(余弦相似度)。之后对于每一个词,存储跟它最相近的top 10单词,最终结果保存在``related_words``里面。 这个计算需要发生在离线,因为计算量很大,复杂度为``O(V*V)``, V是单词的总数。 \n",
"\n",
"这个计算过程的代码请放在``related.py``的文件里,然后结果保存在``related_words.txt``里。 我们在使用的时候直接从文件里读取就可以了,不用再重复计算。所以在此notebook里我们就直接读取已经计算好的结果。 作业提交时需要提交``related.py``和``related_words.txt``文件,这样在使用的时候就不再需要做这方面的计算了。"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"# TODO 读取语义相关的单词\n",
"def get_related_words(file):\n",
" related_words = {}\n",
" with codecs.open(filename,'r','utf8') as Fin:\n",
" lines = Fin.readlines()\n",
" for line in lines:\n",
" words = line.strip().split(' ')\n",
" related_words[words[0]] = words[1:]\n",
" return related_words\n",
"\n",
"related_words = {} \n",
"#related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.4 利用倒排表搜索\n",
"在这里,我们使用倒排表先获得一批候选问题,然后再通过余弦相似度做精准匹配,这样一来可以节省大量的时间。搜索过程分成两步:\n",
"\n",
"- 使用倒排表把候选问题全部提取出来。首先,对输入的新问题做分词等必要的预处理工作,然后对于句子里的每一个单词,从``related_words``里提取出跟它意思相近的top 10单词, 然后根据这些top词从倒排表里提取相关的文档,把所有的文档返回。 这部分可以放在下面的函数当中,也可以放在外部。\n",
"- 然后针对于这些文档做余弦相似度的计算,最后排序并选出最好的答案。\n",
"\n",
"可以适当定义自定义函数,使得减少重复性代码"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"def cosineSimilarity(vec1,vec2):\n",
" return np.dot(vec1,vec2.T)/(np.sqrt(np.sum(vec1**2))*np.sqrt(np.sum(vec2**2)))\n",
"\n",
"def getCandidate(query):\n",
" searched = set()\n",
" for w in query.lower().strip().split(' '):\n",
" if w not in word2id or w not in inverted_idx:\n",
" continue\n",
" if len(searched) == 0:\n",
" searched = set(inverted_idx[w])\n",
" else:\n",
" searched = searched & set(inverted_idx[w])\n",
" if w in related_words:\n",
" for similar in related_words[w]:\n",
" searched = searched & set(inverted_idx[similar])\n",
" return searched\n",
"def get_top_results_tfidf(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" top = 5\n",
" query_tfidf = computeSentenceEach(query,vectorizer,word2id)\n",
" results = Q.PriorityQueue()\n",
" searched = getCandidate(query)\n",
" for candidate in searched:\n",
" result = cosineSimilarity(query_tfidf,X_tfidf[candidate])\n",
" results.put((-1*result,candidate))\n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" i = 0\n",
" while i < top and not results.empty():\n",
" top_idxs.append(results.get()[1])\n",
" i +=1\n",
" \n",
" return np.array(alist)[top_idxs] \n",
" \n",
"# return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"def get_top_results_w2v(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" top = 5\n",
" emb = loadEmbedding('glove.6B.200d.txt')# 这是 D*H的矩阵,这里的D是词典库的大小, H是词向量的大小。 这里面我们给定的每个单词的词向量,\n",
" # 这需要从文本中读取\n",
" \n",
" query_emb = computeGloveSentence(query,emb)\n",
" results = Q.PriorityQueue()\n",
" searched = getCandidate(query)\n",
" for candidate in searched:\n",
" result = cosineSimilarity(query_emb,X_w2v[candidate])\n",
" results.put((-1*result,candidate))\n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" i = 0\n",
" while i < top and not results.empty():\n",
" top_idxs.append(results.get()[1])\n",
" i +=1\n",
" \n",
" return np.array(alist)[top_idxs] \n",
"\n",
" \n",
"# return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"def get_top_results_bert(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" top = 5\n",
" query_emb = np.sum(bert_embedding([query],'sum')[0][1],axis = 0) / len(query.strip().split())\n",
" results = Q.PriorityQueue()\n",
" searched = getCandidate(query)\n",
" for candidate in searched:\n",
" result = cosineSimilarity(query_emb,X_bert[candidate])\n",
" results.put((-1*result,candidate)) \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" i = 0\n",
" while i < top and not results.empty():\n",
" top_idxs.append(results.get()[1])\n",
" i +=1\n",
" return np.array(alist)[top_idxs] \n",
" \n",
"# return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"\n",
"test_query1 = \"\"\n",
"test_query2 = \"\"\n",
"\n",
"print (get_top_results_tfidf(test_query1))\n",
"print (get_top_results_w2v(test_query1))\n",
"print (get_top_results_bert(test_query1))\n",
"\n",
"print (get_top_results_tfidf(test_query2))\n",
"print (get_top_results_w2v(test_query2))\n",
"print (get_top_results_bert(test_query2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. 拼写纠错\n",
"其实用户在输入问题的时候,不能期待他一定会输入正确,有可能输入的单词的拼写错误的。这个时候我们需要后台及时捕获拼写错误,并进行纠正,然后再通过修正之后的结果再跟库里的问题做匹配。这里我们需要实现一个简单的拼写纠错的代码,然后自动去修复错误的单词。\n",
"\n",
"这里使用的拼写纠错方法是课程里讲过的方法,就是使用noisy channel model。 我们回想一下它的表示:\n",
"\n",
"$c^* = \\text{argmax}_{c\\in candidates} ~~p(c|s) = \\text{argmax}_{c\\in candidates} ~~p(s|c)p(c)$\n",
"\n",
"这里的```candidates```指的是针对于错误的单词的候选集,这部分我们可以假定是通过edit_distance来获取的(比如生成跟当前的词距离为1/2的所有的valid 单词。 valid单词可以定义为存在词典里的单词。 ```c```代表的是正确的单词, ```s```代表的是用户错误拼写的单词。 所以我们的目的是要寻找出在``candidates``里让上述概率最大的正确写法``c``。 \n",
"\n",
"$p(s|c)$,这个概率我们可以通过历史数据来获得,也就是对于一个正确的单词$c$, 有百分之多少人把它写成了错误的形式1,形式2... 这部分的数据可以从``spell_errors.txt``里面找得到。但在这个文件里,我们并没有标记这个概率,所以可以使用uniform probability来表示。这个也叫做channel probability。\n",
"\n",
"$p(c)$,这一项代表的是语言模型,也就是假如我们把错误的$s$,改造成了$c$, 把它加入到当前的语句之后有多通顺?在本次项目里我们使用bigram来评估这个概率。 举个例子: 假如有两个候选 $c_1, c_2$, 然后我们希望分别计算出这个语言模型的概率。 由于我们使用的是``bigram``, 我们需要计算出两个概率,分别是当前词前面和后面词的``bigram``概率。 用一个例子来表示:\n",
"\n",
"给定: ``We are go to school tomorrow``, 对于这句话我们希望把中间的``go``替换成正确的形式,假如候选集里有个,分别是``going``, ``went``, 这时候我们分别对这俩计算如下的概率:\n",
"$p(going|are)p(to|going)$和 $p(went|are)p(to|went)$, 然后把这个概率当做是$p(c)$的概率。 然后再跟``channel probability``结合给出最终的概率大小。\n",
"\n",
"那这里的$p(are|going)$这些bigram概率又如何计算呢?答案是训练一个语言模型! 但训练一个语言模型需要一些文本数据,这个数据怎么找? 在这次项目作业里我们会用到``nltk``自带的``reuters``的文本类数据来训练一个语言模型。当然,如果你有资源你也可以尝试其他更大的数据。最终目的就是计算出``bigram``概率。 "
]
},
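The `We are go to school tomorrow` example above can be played through numerically. The channel and bigram probabilities below are made-up toy values (assumptions, not estimated from `spell-errors.txt` or reuters), chosen only to show how the two terms combine in log space:

```python
import math

# toy probabilities (assumptions)
channel = {"going": {"go": 0.5}, "went": {"go": 0.5}}
bigram = {("are", "going"): 0.05, ("going", "to"): 0.2,
          ("are", "went"): 0.001, ("went", "to"): 0.05}

def score(c, s, prev, nxt):
    # log p(s|c) + log p(c|prev) + log p(nxt|c)
    return (math.log(channel[c][s])
            + math.log(bigram[(prev, c)])
            + math.log(bigram[(c, nxt)]))

best = max(["going", "went"], key=lambda c: score(c, "go", "are", "to"))
```

Since both channel probabilities are equal here, the bigram term decides, and `"going"` wins.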
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.1 训练一个语言模型\n",
"在这里,我们使用``nltk``自带的``reuters``数据来训练一个语言模型。 使用``add-one smoothing``"
]
},
{
"cell_type": "code",
"execution_count": 91,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[nltk_data] Downloading package reuters to /Users/fanhang/nltk_data...\n",
"[nltk_data] Package reuters is already up-to-date!\n",
"[nltk_data] Downloading package punkt to /Users/fanhang/nltk_data...\n",
"[nltk_data] Package punkt is already up-to-date!\n",
"['<s>', 'ASIAN', 'EXPORTERS', 'FEAR', 'DAMAGE', 'FROM', 'U', '.', 'S', '.-', 'JAPAN', 'RIFT', 'Mounting', 'trade', 'friction', 'between', 'the', 'U', '.', 'S', '.', 'And', 'Japan', 'has', 'raised', 'fears', 'among', 'many', 'of', 'Asia', \"'\", 's', 'exporting', 'nations', 'that', 'the', 'row', 'could', 'inflict', 'far', '-', 'reaching', 'economic', 'damage', ',', 'businessmen', 'and', 'officials', 'said', '.', '<s>']\n",
"unigram done\n",
"3.1432702583768155e-05\n"
]
}
],
"source": [
"import nltk\n",
"nltk.download('reuters')\n",
"nltk.download('punkt')\n",
"import numpy as np\n",
"import codecs\n",
"\n",
"# 读取语料库的数据\n",
"categories = reuters.categories()\n",
"corpus = reuters.sents(categories=categories)\n",
"\n",
"# 循环所有的语料库并构建bigram probability. bigram[word1][word2]: 在word1出现的情况下下一个是word2的概率。 \n",
"new_corpus = []\n",
"for sent in corpus:\n",
" new_corpus.append(['<s>'] + sent + ['<s>'])\n",
" \n",
"print(new_corpus[0])\n",
"word2id = dict()\n",
"id2word = dict()\n",
"for sent in new_corpus:\n",
" for w in sent:\n",
" w = w.lower()\n",
" if w in word2id:\n",
" continue\n",
" id2word[len(word2id)] = w\n",
" word2id[w] = len(word2id)\n",
"\n",
"vocab_size = len(word2id)\n",
"count_uni = np.zeros(vocab_size)\n",
"count_bi = np.zeros((vocab_size,vocab_size))\n",
"\n",
"for sent in new_corpus:\n",
" for i,w in enumerate(sent):\n",
" w = w.lower()\n",
" count_uni[word2id[w]] +=1\n",
" if i < len(sent) -1:\n",
" count_bi[word2id[w],word2id[sent[i+1].lower()]] +=1\n",
" \n",
"print(\"unigram done\")\n",
"bigram = np.zeros((vocab_size,vocab_size))\n",
"\n",
"for i in range(vocab_size):\n",
" for j in range(vocab_size):\n",
" if count_bi[i,j] ==0:\n",
" bigram[i,j] = 1.0/ (vocab_size+count_uni[i])\n",
" else:\n",
" bigram[i,j] = (1.0 +count_bi[i,j]) / (vocab_size+count_uni[i])\n",
" \n",
"def checkLM(word1,word2):\n",
" if word1.lower() in word2id and word2.lower() in word2id:\n",
" return bigram[word2id[word1.lower()],word2id[word2.lower()]]\n",
" else:\n",
" return 0.0 \n",
"\n",
"print(checkLM('I','like'))\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.2 构建Channel Probs\n",
"基于``spell_errors.txt``文件构建``channel probability``, 其中$channel[c][s]$表示正确的单词$c$被写错成$s$的概率。 "
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'erros' is not defined",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-92-52e1d20ce84b>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0merrors\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0merror\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msplit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m','\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0merrorProb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdict\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 9\u001b[0;31m \u001b[0;32mfor\u001b[0m \u001b[0me\u001b[0m \u001b[0;32min\u001b[0m \u001b[0merros\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 10\u001b[0m \u001b[0merrorProb\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstrip\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m1.0\u001b[0m \u001b[0;34m/\u001b[0m \u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0merrors\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0mchannel\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mcorrect\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstrip\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0merrorProb\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'erros' is not defined"
]
}
],
"source": [
"# TODO 构建channel probability \n",
"channel = {}\n",
"\n",
"for line in open('spell-errors.txt'):\n",
" # TODO\n",
" (correct,error) = line.strip().split(':')\n",
" errors = error.split(',')\n",
" errorProb = dict()\n",
" for e in erros:\n",
" errorProb[e.strip()] = 1.0 / len(errors)\n",
" channel[correct.strip()] = errorProb\n",
"# TODO\n",
"\n",
"print(channel) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.3 根据错别字生成所有候选集合\n",
"给定一个错误的单词,首先生成跟这个单词距离为1或者2的所有的候选集合。 这部分的代码我们在课程上也讲过,可以参考一下。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def filter(words):\n",
" new_words = []\n",
" for w in words:\n",
" if w in word2id:\n",
" new_words.append(w)\n",
" \n",
" return set(new_words)\n",
"def generate_candidates1(word):\n",
" chars = 'abcdefghijklmnopqrstuvwxyz'\n",
" words = set([])\n",
" words = set(word[0:i] + chars[j] + word[i:] for i in range(len(word)) for j in range(len(chars)))\n",
" words = words | set(word[0:i] + chars[j] + word[i+1:] for i in range(len(word)) for j in range(len(chars)))\n",
" words = words | set(word[0:i] + word[i+1:] for i in range(len(word)) for j in range(len(chars)))\n",
" words = words | set(word[0:i-1] + word[i] + word[i-1]+ word[i+1:] for i in range(len(word)))\n",
" words = filter(words)\n",
" if word in words:\n",
" words.remove(word)\n",
" return words\n",
" \n",
" \n",
" \n",
"def generate_candidates(word):\n",
" # 基于拼写错误的单词,生成跟它的编辑距离为1或者2的单词,并通过词典库的过滤。\n",
" # 只留写法上正确的单词。 \n",
" words = generate_candidates1(word)\n",
" words2 = set([])\n",
" if word in words:\n",
" words2 = words2 | set(generate_candidates1(word))\n",
" \n",
" words2 = filter(words2)\n",
" words = words |words2\n",
" return words\n",
"\n",
"words = generate_candidates('strat')\n",
"print(words)\n",
" \n",
" \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.4 给定一个输入,如果有错误需要纠正\n",
"\n",
"给定一个输入``query``, 如果这里有些单词是拼错的,就需要把它纠正过来。这部分的实现可以简单一点: 对于``query``分词,然后把分词后的每一个单词在词库里面搜一下,假设搜不到的话可以认为是拼写错误的! 人如果拼写错误了再通过``channel``和``bigram``来计算最适合的候选。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import queue as Q\n",
"def word_corrector(word,context):\n",
" word = word.lower()\n",
" candidate = generate_candidates(word)\n",
" if len(candidate) ==0:\n",
" return word\n",
" corrections = Q.PriorityQueue()\n",
" for w in candidate:\n",
" if w in channel and word in channel[w] and w in word2id and context[0].lower() in word2id and context[1].lower() in word:\n",
" probility = np.log(channel[w][word]+0.0001)+np.log(bigram[word2id[context[0].lower()],word2id[w]]) + np.log(bigram[word2id[context[1].lower()],word2id[w]])\n",
" correctors.put((-1 * probility,w))\n",
" if correctors.empty():\n",
" return word\n",
" return correctors.get()[1]\n",
"word = word_corrector('strat',('to','in'))\n",
"print(word)\n",
" \n",
"def spell_corrector(line):\n",
" # 1. 首先做分词,然后把``line``表示成``tokens``\n",
" # 2. 循环每一token, 然后判断是否存在词库里。如果不存在就意味着是拼写错误的,需要修正。 \n",
" # 修正的过程就使用上述提到的``noisy channel model``, 然后从而找出最好的修正之后的结果。 \n",
" new_words = []\n",
" words = ['<s>'] +line.strip().lower().split(' ')+['<s>']\n",
" for i,word in enumerate(words):\n",
" if i == len(words) -1:\n",
" break\n",
" word = word.lower()\n",
" if word not in word2id:\n",
" new_words.append(word_corrector(word,(words[i-1].lower(),words[i+1].lower())))\n",
" else:\n",
" new_words.append(word)\n",
" newline = ' '.join(new_words[1:]) \n",
" return newline # 修正之后的结果,假如用户输入没有问题,那这时候``newline = line``\n",
"\n",
"sentence = spell_corrector('When did Beyonce strat becoming popular')\n",
"print(sentence)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.5 基于拼写纠错算法,实现用户输入自动矫正\n",
"首先有了用户的输入``query``, 然后做必要的处理把句子转换成tokens的形状,然后对于每一个token比较是否是valid, 如果不是的话就进行下面的修正过程。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_query1 = \"When did beyonce strat becoming popular\" # 拼写错误的\n",
"test_query2 = \"What counted for more of the population change\" # 拼写错误的\n",
"\n",
"test_query1 = spell_corector(test_query1)\n",
"test_query2 = spell_corector(test_query2)\n",
"\n",
"print(test_query1)\n",
"print(test_query2)\n",
"#print (get_top_results_tfidf(test_query1))\n",
"#print (get_top_results_w2v(test_query1))\n",
"#print (get_top_results_bert(test_query1))\n",
"\n",
"#print (get_top_results_tfidf(test_query2))\n",
"#print (get_top_results_w2v(test_query2))\n",
"#print (get_top_results_bert(test_query2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 附录 \n",
"在本次项目中我们实现了一个简易的问答系统。基于这个项目,我们其实可以有很多方面的延伸。\n",
"- 在这里,我们使用文本向量之间的余弦相似度作为了一个标准。但实际上,我们也可以基于基于包含关键词的情况来给一定的权重。比如一个单词跟related word有多相似,越相似就意味着相似度更高,权重也会更大。 \n",
"- 另外 ,除了根据词向量去寻找``related words``也可以提前定义好同义词库,但这个需要大量的人力成本。 \n",
"- 在这里,我们直接返回了问题的答案。 但在理想情况下,我们还是希望通过问题的种类来返回最合适的答案。 比如一个用户问:“明天北京的天气是多少?”, 那这个问题的答案其实是一个具体的温度(其实也叫做实体),所以需要在答案的基础上做进一步的抽取。这项技术其实是跟信息抽取相关的。 \n",
"- 对于词向量,我们只是使用了``average pooling``, 除了average pooling,我们也还有其他的经典的方法直接去学出一个句子的向量。\n",
"- 短文的相似度分析一直是业界和学术界一个具有挑战性的问题。在这里我们使用尽可能多的同义词来提升系统的性能。但除了这种简单的方法,可以尝试其他的方法比如WMD,或者适当结合parsing相关的知识点。 "
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"好了,祝你好运! "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}