{ "cells": [ { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## 搭建一个简单的问答系统 (Building a Simple QA System)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "本次项目的目标是搭建一个基于检索式的简易的问答系统,这是一个最经典的方法也是最有效的方法。 \n", "\n", "```不要单独创建一个文件,所有的都在这里面编写,不要试图改已经有的函数名字 (但可以根据需求自己定义新的函数)```\n", "\n", "```预估完成时间```: 5-10小时" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 检索式的问答系统\n", "问答系统所需要的数据已经提供,对于每一个问题都可以找得到相应的答案,所以可以理解为每一个样本数据是 ``<问题、答案>``。 那系统的核心是当用户输入一个问题的时候,首先要找到跟这个问题最相近的已经存储在库里的问题,然后直接返回相应的答案即可(但实际上也可以抽取其中的实体或者关键词)。 举一个简单的例子:\n", "\n", "假设我们的库里面已有存在以下几个<问题,答案>:\n", "- <\"贪心学院主要做什么方面的业务?”, “他们主要做人工智能方面的教育”>\n", "- <“国内有哪些做人工智能教育的公司?”, “贪心学院”>\n", "- <\"人工智能和机器学习的关系什么?\", \"其实机器学习是人工智能的一个范畴,很多人工智能的应用要基于机器学习的技术\">\n", "- <\"人工智能最核心的语言是什么?\", ”Python“>\n", "- .....\n", "\n", "假设一个用户往系统中输入了问题 “贪心学院是做什么的?”, 那这时候系统先去匹配最相近的“已经存在库里的”问题。 那在这里很显然是 “贪心学院是做什么的”和“贪心学院主要做什么方面的业务?”是最相近的。 所以当我们定位到这个问题之后,直接返回它的答案 “他们主要做人工智能方面的教育”就可以了。 所以这里的核心问题可以归结为计算两个问句(query)之间的相似度。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 项目中涉及到的任务描述\n", "问答系统看似简单,但其中涉及到的内容比较多。 在这里先做一个简单的解释,总体来讲,我们即将要搭建的模块包括:\n", "\n", "- 文本的读取: 需要从相应的文件里读取```(问题,答案)```\n", "- 文本预处理: 清洗文本很重要,需要涉及到```停用词过滤```等工作\n", "- 文本的表示: 如果表示一个句子是非常核心的问题,这里会涉及到```tf-idf```, ```Glove```以及```BERT Embedding```\n", "- 文本相似度匹配: 在基于检索式系统中一个核心的部分是计算文本之间的```相似度```,从而选择相似度最高的问题然后返回这些问题的答案\n", "- 倒排表: 为了加速搜索速度,我们需要设计```倒排表```来存储每一个词与出现的文本\n", "- 词义匹配:直接使用倒排表会忽略到一些意思上相近但不完全一样的单词,我们需要做这部分的处理。我们需要提前构建好```相似的单词```然后搜索阶段使用\n", "- 拼写纠错:我们不能保证用户输入的准确,所以第一步需要做用户输入检查,如果发现用户拼错了,我们需要及时在后台改正,然后按照修改后的在库里面搜索\n", "- 文档的排序: 最后返回结果的排序根据文档之间```余弦相似度```有关,同时也跟倒排表中匹配的单词有关\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 项目中需要的数据:\n", "1. ```dev-v2.0.json```: 这个数据包含了问题和答案的pair, 但是以JSON格式存在,需要编写parser来提取出里面的问题和答案。 \n", "2. ```glove.6B```: 这个文件需要从网上下载,下载地址为:https://nlp.stanford.edu/projects/glove/, 请使用d=200的词向量\n", "3. ```spell-errors.txt``` 这个文件主要用来编写拼写纠错模块。 文件中第一列为正确的单词,之后列出来的单词都是常见的错误写法。 但这里需要注意的一点是我们没有给出他们之间的概率,也就是p(错误|正确),所以我们可以认为每一种类型的错误都是```同等概率```\n", "4. ```vocab.txt``` 这里列了几万个英文常见的单词,可以用这个词库来验证是否有些单词被拼错\n", "5. 
```testdata.txt``` 这里搜集了一些测试数据,可以用来测试自己的spell corrector。这个文件只是用来测试自己的程序。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "在本次项目中,你将会用到以下几个工具:\n", "- ```sklearn```。具体安装请见:http://scikit-learn.org/stable/install.html sklearn包含了各类机器学习算法和数据处理工具,包括本项目需要使用的词袋模型,均可以在sklearn工具包中找得到。 \n", "- ```jieba```,用来做分词。具体使用方法请见 https://github.com/fxsjy/jieba\n", "- ```bert embedding```: https://github.com/imgarylai/bert-embedding\n", "- ```nltk```:https://www.nltk.org/index.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 第一部分:对于训练数据的处理:读取文件和预处理" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- ```文本的读取```: 需要从文本中读取数据,此处需要读取的文件是```dev-v2.0.json```,并把读取的文件存入一个列表里(list)\n", "- ```文本预处理```: 对于问题本身需要做一些停用词过滤等文本方面的处理\n", "- ```可视化分析```: 对于给定的样本数据,做一些可视化分析来更好地理解数据" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.1节: 文本的读取\n", "把给定的文本数据读入到```qlist```和```alist```当中,这两个分别是列表,其中```qlist```是问题的列表,```alist```是对应的答案列表" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [], "source": [ "import json\n", "import matplotlib.pyplot as plt\n", "def read_corpus():\n", "    \"\"\"\n", "    读取给定的语料库,并把问题列表和答案列表分别写入到 qlist, alist 里面。 在此过程中,不用对字符串做任何的处理(这部分会在 Part 1.3 里处理)\n", "    qlist = [\"问题1\", “问题2”, “问题3” ....]\n", "    alist = [\"答案1\", \"答案2\", \"答案3\" ....]\n", "    务必要让每一个问题和答案对应起来(下标位置一致)\n", "    \"\"\"\n", "    # TODO 需要完成的代码部分 ...\n", "    filename=\"./train-v2.0.json\"\n", "    qlist=[]\n", "    alist=[]\n", "    with open(filename) as f:\n", "        data_array = json.load(f)['data']\n", "        for data in data_array:\n", "            paragraphs = data['paragraphs']\n", "            for paragraph in paragraphs:\n", "                qas = paragraph['qas']\n", "                for qa in qas:\n", "                    # SQuAD 2.0 中不可回答的问题没有 answers,只有 plausible_answers\n", "                    qlist.append(qa['question'])\n", "                    if 'plausible_answers' in qa:\n", "                        alist.append(qa['plausible_answers'][0]['text'])\n", "                    else:\n", "                        alist.append(qa['answers'][0]['text'])\n", "\n", "\n", "    with open('a_list.json','w') as file:\n", "        json.dump(alist,file)\n", "    assert len(qlist) == len(alist) # 确保长度一样\n", "    return qlist, alist\n" ] }, 
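{ "cell_type": "markdown", "metadata": {}, "source": [ "读取完成之后,可以先做一个简单的完整性检查(下面只是一个示意性的检查,不属于作业要求):确认 ```qlist``` 和 ```alist``` 长度一致,并抽样看几个 <问题, 答案> 对,确保下标是对应的。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 示意性的检查:假设上面的 read_corpus() 已经可以运行,且对应的 SQuAD json 文件在当前目录下\n", "qlist, alist = read_corpus()\n", "print(len(qlist), len(alist))   # 两个长度应该一致\n", "for q, a in list(zip(qlist, alist))[:3]:\n", "    print('Q:', q)\n", "    print('A:', a)" ] }, 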
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.2 理解数据(可视化分析/统计信息)\n", "对数据的理解是任何AI工作的第一步, 需要对数据有个比较直观的认识。在这里,简单地统计一下:\n", "\n", "- 在```qlist```出现的总单词个数\n", "- 按照词频画一个```histogram``` plot" ] }, { "cell_type": "code", "execution_count": 63, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "10574217 76474\n" ] } ], "source": [ "# TODO: 统计一下在qlist中总共出现了多少个单词? 总共出现了多少个不同的单词(unique word)?\n", "# 这里需要做简单的分词,对于英文我们根据空格来分词即可,其他过滤暂不考虑(只需分词)\n", "def count_word(qlist):\n", "    qlist_dic={}\n", "    word_total=0\n", "\n", "    for line in qlist:\n", "        words=line.strip().split()\n", "        for item in words:\n", "            word_total+=1\n", "            if item in qlist_dic:\n", "                qlist_dic[item]+=1\n", "            else:\n", "                qlist_dic[item]=1\n", "    return qlist_dic,word_total\n", "qlist,alist=read_corpus()\n", "qlist_dic,word_total=count_word(qlist)\n", "print (word_total,len(qlist_dic.keys()))\n", "    " ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<Figure size 432x288 with 1 Axes>" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# TODO: 统计一下qlist中出现1次,2次,3次... 出现的单词个数, 然后画一个plot. 这里的x轴是单词出现的次数(1,2,3,..), y轴是单词个数。\n", "# 从左到右分别是 出现1次的单词数,出现2次的单词数,出现3次的单词数... \n", "from collections import Counter\n", "def plot_table(qlist_dic):\n", "    words_freq=qlist_dic.values()\n", "    words_freq_dic=Counter(words_freq) #此时字典中的键就是词频,值就是具有相同词频的单词数量 \n", "    x=sorted(words_freq_dic.keys()) #排序,确保词频(即X轴)递增\n", "    y=[]\n", "    #找到每个词频对应的单词数量\n", "    for i in x:\n", "        y.append(words_freq_dic[i])\n", "    \n", "    plt.plot(x, y, color=\"r\", linestyle=\"-\", marker=\"*\", linewidth=1.0)\n", "    plt.show()\n", "plot_table(qlist_dic) \n" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [], "source": [ "# TODO: 从上面的图中能观察到什么样的现象? 这样的一个图的形状跟一个非常著名的函数形状很类似,能说出此定理吗? \n", "# answer: Zipf's law(齐夫定律):极少数高频词占据了绝大部分的出现次数,而绝大多数单词都是低频词\n", "# 该定律以语言学家 George Kingsley Zipf 的名字命名\n", "# " ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "#### 1.3 文本预处理\n", "此部分需要做文本方面的处理。 以下是可以用到的一些方法:\n", "\n", "- 1. 停用词过滤 (去网上搜一下 \"english stop words list\",会出现很多包含停用词库的网页,或者直接使用NLTK自带的) \n", "- 2. 转换成lower_case: 这是一个基本的操作 \n", "- 3. 去掉一些无用的符号: 比如连续的感叹号!!!, 或者一些奇怪的单词。\n", "- 4. 去掉出现频率很低的词:比如出现次数少于10,20.... (想一下如何选择阈值)\n", "- 5. 对于数字的处理: 分词完之后有些单词可能就是数字比如44,415,把所有这些数字都看成是一个单词,这个新的单词我们可以定义为 \"#number\"\n", "- 6. 
lemmazation: 在这里不要使用stemming, 因为stemming的结果有可能不是valid word。\n" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "beyonce start becoming popular\n" ] } ], "source": [ "# TODO: 需要做文本方面的处理。 从上述几个常用的方法中选择合适的方法给qlist做预处理(不一定要按照上面的顺序,不一定要全部使用)\n", "from nltk.corpus import stopwords\n", "import string\n", "#利用上面得到的有序词频列表x,选取百分比来确定低频值\n", "def get_lowfreq_list(qlist_dic,percent):\n", " words_freq=qlist_dic.values()\n", " words_freq_dic=Counter(words_freq) #此时字典中的键就是词频,值就是具有相同词频的单词数量 \n", " x=sorted(words_freq_dic.keys())\n", " \n", " index=float(percent)*len(x)\n", " index=int(index)+1#避免取0值\n", " low_freq=x[index]\n", " low_freq_list=[]\n", " for i,j in qlist_dic.items():\n", " if j<low_freq:\n", " low_freq_list.append(i)\n", " return low_freq_list\n", "#不替换数字,也不去除低频词\n", "def text_preprocess(input_list):\n", " #low_freq_list=get_lowfreq_list(qlist_dic,percent)\n", " stop_words = stopwords.words('english')\n", " new_list=[]\n", " for line in input_list:\n", " line=line.lower()\n", " line=''.join(c for c in line if c not in string.punctuation)#去除无用符号\n", " split_line=line.strip().split()\n", " #print(words)\n", " new_line=[word for word in split_line if word not in stop_words]\n", " filtered_line=' '.join(new_line)\n", " new_list.append(filtered_line)\n", " print(new_list[0])\n", " with open(\"qlist.json\",'w')as file:\n", " json.dump(new_list,file)\n", " return new_list\n", "\n", "qlist,alist=read_corpus()\n", "new_list= text_preprocess(qlist) # 更新后的问题列表" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 第二部分: 文本的表示\n", "当我们做完必要的文本处理之后就需要想办法表示文本了,这里有几种方式\n", "\n", "- 1. 使用```tf-idf vector```\n", "- 2. 使用embedding技术如```word2vec```, ```bert embedding```等\n", "\n", "下面我们分别提取这三个特征来做对比。 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.1 使用tf-idf表示向量\n", "把```qlist```中的每一个问题的字符串转换成```tf-idf```向量, 转换之后的结果存储在```X```矩阵里。 ``X``的大小是: ``N* D``的矩阵。 这里``N``是问题的个数(样本个数),\n", "``D``是词典库的大小" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "# TODO \n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "def Tfidf_vec(qlist):\n", " vectorizer = TfidfVectorizer()# 定义一个tf-idf的vectorizer\n", " X = vectorizer.fit_transform(qlist) \n", " # 结果存放在X矩阵里\n", " return X\n", "\n", "\n", "#print(Tfidf_vec(qlist)[0])\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.2 使用wordvec + average pooling\n", "词向量方面需要下载: https://nlp.stanford.edu/projects/glove/ (请下载``glove.6B.zip``),并使用``d=200``的词向量(200维)。国外网址如果很慢,可以在百度上搜索国内服务器上的。 每个词向量获取完之后,即可以得到一个句子的向量。 我们通过``average pooling``来实现句子的向量。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO 基于Glove向量获取句子向量\n", "def get_glove_vec(qlist):\n", " import numpy as np\n", " with open(\"glove.6B.200d.txt\",\"r\",encoding=\"utf-8\") as glovefile:\n", " word2vec={}\n", " for i,each_line in enumerate(glovefile):\n", " each_vec=each_line.strip().split(' ')\n", " word2vec[each_vec[0]]=[float(i) for i in each_vec[1:]]\n", " with open(\"glove.json\",'w')as file:\n", " json.dump(word2vec,file)\n", " \n", " print(word2vec['of'])\n", " X_w2v=[] \n", " for line in qlist:\n", " words=line.strip().split()\n", " line_vec=[]\n", " count_word=0\n", " for word in words:\n", " if word in word2vec: \n", " count_word+=1\n", " line_vec.append(word2vec[word])\n", " 
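# average pooling:句向量 = 句中命中 GloVe 词表的词向量之和 / 命中词数 count_word\n", "        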
X_w2v.append(np.sum(np.array(line_vec),axis=0)/count_word) \n", "    #print(X_w2v)\n", "    # 初始化完emb之后就可以对每一个句子来构建句子向量了,这个过程使用average pooling来实现\n", "    X_w2v=np.asarray(X_w2v)\n", "    return X_w2v\n", "get_glove_vec(qlist)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.3 使用BERT + average pooling\n", "最近流行的BERT也可以用来学出上下文相关的词向量(context-aware embedding), 在很多问题上得到了比较好的结果。在这里,我们不做任何的训练,而是直接使用已经训练好的BERT embedding。 具体如何训练BERT将在之后章节里体会到。 为了获取BERT-embedding,可以直接下载已经训练好的模型从而获得每一个单词的向量。可以从这里获取: https://github.com/imgarylai/bert-embedding , 请使用```bert_12_768_12```。 当然,你也可以从其他source获取也没问题,只要是合理的词向量。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO 基于BERT的句子向量计算\n", "!pip install bert_embedding\n", "def get_bert_vec(qlist):\n", "    import numpy as np\n", "    from bert_embedding import BertEmbedding\n", "    bert_embedding = BertEmbedding(model='bert_12_768_12', dataset_name='book_corpus_wiki_en_cased')\n", "    bert=[]\n", "    for line in qlist:\n", "        result = bert_embedding([line])\n", "        item=result[0]\n", "        if len(item[0])>0:\n", "            bert.append(np.sum(np.array(item[1]),axis=0)/len(item[0]))\n", "    X_bert =np.asarray(bert) # 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。\n", "    print(X_bert[0])\n", "    return X_bert\n", "get_bert_vec(qlist)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### 第三部分: 相似度匹配以及搜索\n", "在这部分里,我们需要把用户每一个输入跟知识库里的每一个问题做一个相似度计算,从而得出最相似的问题。但对于这个问题,时间复杂度其实很高,所以我们需要结合倒排表来获取相似度最高的问题,从而获得答案。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.1 tf-idf + 余弦相似度\n", "我们可以直接基于计算出来的``tf-idf``向量,计算用户最新问题与库中存储的问题之间的相似度,从而选择相似度最高的问题的答案。这个方法的复杂度为``O(N)``, ``N``是库中问题的个数。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import heapq as hp\n", "from sklearn.metrics.pairwise import cosine_similarity\n", "#处理查询句子\n", "def movestopwords(sentence):\n", "    from nltk.corpus import stopwords\n", "    # 英文的大写全部改成小写\n", "    q_list_lower = sentence.lower() \n", "    # 把问题拆解成一个一个单词\n", "    word_tokens =q_list_lower.strip().split() \n", "    # 过滤掉停用词\n", "    ## 增加stopwords\n", "    stop_words = set(stopwords.words('english'))\n", "    stop_words.update(['.', ',', '\"', \"'\", '?', '!', ':', ';', '(', ')', '[', ']', '{', '}'])\n", "    filtered_words = [word for word in word_tokens if word not in stop_words]\n", "    sent=\" \".join(filtered_words)\n", "    return sent\n", "def get_top_results_tfidf_noindex(query):\n", "    # TODO 需要编写\n", "    \"\"\"\n", "    给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n", "    1. 对于用户的输入 query 首先做一系列的预处理(上面提到的方法),然后再转换成tf-idf向量(利用上面的vectorizer)\n", "    2. 计算跟每个库里的问题之间的相似度\n", "    3. 找出相似度最高的top5问题的答案\n", "    \"\"\"\n", "    import heapq \n", "    with open('qlist.json','r')as file:\n", "        q_list=file.read()\n", "    qlist=json.loads(q_list)\n", "    vectorizer = TfidfVectorizer()\n", "    qlist_vec=vectorizer.fit_transform(qlist)\n", "    #print(qlist_vec[0])\n", "    sent=movestopwords(query)\n", "    query_vec=vectorizer.transform([sent])\n", "    #print(query_vec[0])\n", "    #计算相似度最高的前5个\n", "    # cosine_similarity(query_vec, qlist_vec) 的形状是 (1, N),取第 0 行即可得到与每个问题的相似度\n", "    all_result=cosine_similarity(query_vec,qlist_vec)[0]\n", "    #print(all_result)\n", "    cos_list=list(all_result)\n", "    top_idxs =map(cos_list.index, heapq.nlargest(5, cos_list)) \n", "    #print(top_idxs)\n", "    top_answer= [] # top_idxs存放相似度最高的(存在qlist里的)问题的下标 \n", "    # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做?\n", "    with open('a_list.json','r')as file:\n", "        a_list=file.read()\n", "    alist=json.loads(a_list)\n", "    for index in top_idxs:\n", "        top_answer.append(alist[index])\n", "    return top_answer # 返回相似度最高的问题对应的答案,作为TOP5答案 " ] }, 
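{ "cell_type": "markdown", "metadata": {}, "source": [ "上面的 ``get_top_results_tfidf_noindex`` 每次查询都会重新读取 ``qlist.json`` 并重新 fit 一遍 ``TfidfVectorizer``,这也是它慢的一个原因。下面是一个示意性的改写(函数名 ``get_top_results_tfidf_cached`` 是这里为演示新起的,不属于作业要求):把 vectorizer 和问题矩阵只计算一次并缓存起来,查询时直接复用。前提是 ``qlist.json`` 和 ``a_list.json`` 已经在前面生成。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 示意:缓存 TfidfVectorizer 和问题矩阵,避免每次查询都重新 fit(假设 qlist.json / a_list.json 已存在)\n", "import json\n", "import heapq\n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "from sklearn.metrics.pairwise import cosine_similarity\n", "\n", "with open('qlist.json', 'r') as f:\n", "    _cached_qlist = json.load(f)\n", "with open('a_list.json', 'r') as f:\n", "    _cached_alist = json.load(f)\n", "\n", "_cached_vectorizer = TfidfVectorizer()\n", "_cached_qmatrix = _cached_vectorizer.fit_transform(_cached_qlist)  # 只 fit 一次\n", "\n", "def get_top_results_tfidf_cached(query, k=5):\n", "    # 与 get_top_results_tfidf_noindex 功能相同,但复用缓存好的向量化结果\n", "    sent = movestopwords(query)\n", "    query_vec = _cached_vectorizer.transform([sent])\n", "    sims = cosine_similarity(query_vec, _cached_qmatrix)[0]\n", "    top_idxs = heapq.nlargest(k, range(len(sims)), key=lambda i: sims[i])\n", "    return [_cached_alist[i] for i in top_idxs]" ] }, 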
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO: 编写几个测试用例,并输出结果\n", "sent1=movestopwords(\"when did Beyonce start singing\")\n", "print (get_top_results_tfidf_noindex(sent1))\n", "sent2=movestopwords(\"what is machine learning\")\n", "print (get_top_results_tfidf_noindex(sent2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "你会发现上述的程序很慢,没错! 是因为循环了所有库里的问题。为了优化这个过程,我们需要使用一种数据结构叫做```倒排表```。 使用倒排表我们可以把单词和出现这个单词的文档关联起来。 之后假如要搜索包含某一个单词的文档,即可以非常快速地找出这些文档。 在这个QA系统上,我们首先使用倒排表来快速查找包含至少一个单词的文档,然后再进行余弦相似度的计算,即可以大大减少```时间复杂度```。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.2 倒排表的创建\n", "倒排表的创建其实很简单,最简单的方法就是循环所有的单词一遍,然后记录每一个单词所出现的文档,然后把这些文档的ID保存成list即可。我们可以定义一个类似于```hash_map```的结构, 比如 ``inverted_index = {}``, 然后存放包含每一个关键词的文档出现在了什么位置,也就是,通过关键词的搜索首先来判断包含这些关键词的文档(比如出现至少一个),然后对于candidates问题做相似度比较。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_inverted_index(qlist_dic):\n", "    word_total = [item.lower() for item in qlist_dic.keys()]\n", "    inverted_idx = {}  # 定义一个简单的倒排表,是一个map结构。 循环所有qlist一遍就可以\n", "    # 注意:这里读取的是 1.3 预处理之后保存的 qlist.json\n", "    with open('qlist.json') as file:\n", "        q_list=file.read()\n", "    qlist=json.loads(q_list)\n", "\n", "    for index,i in enumerate(word_total):\n", "        tmp=[]\n", "        j=0\n", "        while j<len(qlist):\n", "            field=qlist[j]\n", "            split_field=field.split()\n", "            if i in split_field:\n", "                tmp.append(j)\n", "            j+=1\n", "        inverted_idx[i]=tmp\n", "    \n", "    with open('inverted_idx.json','w') as file_object:\n", "        json.dump(inverted_idx,file_object)\n", "    return inverted_idx\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.3 语义相似度\n", "这里有一个问题还需要解决,就是语义的相似度。可以这么理解: 两个单词比如car, auto这两个单词长得不一样,但从语义上还是类似的。如果只是使用倒排表我们不能考虑到这些单词之间的相似度,这就导致如果我们搜索句子里包含了``car``, 则我们没法获取到包含auto的所有的文档。所以我们希望把这些信息也存下来。那这个问题如何解决呢? 其实也不难,可以提前构建好相似度的关系,比如对于``car``这个单词,一开始就找好跟它意思上比较类似的单词比如top 10,这些都标记为``related words``。所以最后我们就可以创建一个保存``related words``的一个``map``. 比如调用``related_words['car']``即可以调取出跟``car``意思上相近的TOP 10的单词。 \n", "\n", "那这个``related_words``又如何构建呢? 
在这里我们仍然使用``Glove``向量,然后计算一下俩俩的相似度(余弦相似度)。之后对于每一个词,存储跟它最相近的top 10单词,最终结果保存在``related_words``里面。 这个计算需要发生在离线,因为计算量很大,复杂度为``O(V*V)``, V是单词的总数。 \n", "\n", "这个计算过程的代码请放在``related.py``的文件里,然后结果保存在``related_words.txt``里。 我们在使用的时候直接从文件里读取就可以了,不用再重复计算。所以在此notebook里我们就直接读取已经计算好的结果。 作业提交时需要提交``related.py``和``related_words.txt``文件,这样在使用的时候就不再需要做这方面的计算了。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO 读取语义相关的单词\n", "def get_related_words(file):\n", " \n", " return related_words\n", "\n", "related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.4 利用倒排表搜索\n", "在这里,我们使用倒排表先获得一批候选问题,然后再通过余弦相似度做精准匹配,这样一来可以节省大量的时间。搜索过程分成两步:\n", "\n", "- 使用倒排表把候选问题全部提取出来。首先,对输入的新问题做分词等必要的预处理工作,然后对于句子里的每一个单词,从``related_words``里提取出跟它意思相近的top 10单词, 然后根据这些top词从倒排表里提取相关的文档,把所有的文档返回。 这部分可以放在下面的函数当中,也可以放在外部。\n", "- 然后针对于这些文档做余弦相似度的计算,最后排序并选出最好的答案。\n", "\n", "可以适当定义自定义函数,使得减少重复性代码" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#第一大模块\n", "#处理查询句子\n", "def q_movestopwords(sentence):\n", " from nltk.corpus import stopwords\n", " # 英文的大写全部改成小写\n", " q_list_lower = sentence.lower() \n", "\n", " # 把问题拆解成一个一个单词\n", " word_tokens =q_list_lower.strip().split()\n", " # 过滤掉停用词\n", " ## 增加stopwords\n", " stop_words = set(stopwords.words('english'))\n", " stop_words.update(['.', ',', '\"', \"'\", '?', '!', ':', ';', '(', ')', '[', ']', '{', '}'])\n", " filtered_words = [word for word in word_tokens if word not in stop_words]\n", " filtered_sent=\" \".join(filtered_words)\n", "\n", " return filtered_words,filtered_sent\n", "\n", "\n", "\n", "#第二大模块\n", "#取得查询语句意义相近的词\n", "def get_related_words(filtered_list):\n", " import json\n", " # 读取word的相似词字典.json\n", " with open('related_words.json', 'r') as fp:\n", " related_words_dic = json.load(fp)\n", " \n", " simi_list = []\n", " for w in filtered_list:\n", " temp = related_words_dic[w]\n", " temp_list = temp[0]\n", " simi_list += temp_list\n", " return simi_list\n", "\n", "#取得相近的词在整个问题列表中的索引\n", "def get_inverted_idx(simi_list):\n", " import json\n", " # 读取word的倒排表字典.json\n", " with open('inverted_idx.json', 'r') as fp:\n", " inverted_idx_dic = json.load(fp)\n", " \n", " # 查找问题的index列表\n", " question_list_idnex = []\n", " for word in simi_list:\n", " \n", " # 只找字典中有的单词进行查找\n", " if word in inverted_idx_dic:\n", " question_list_idnex += inverted_idx_dic[word]\n", "\n", " # 清除重复的问题的index \n", " index_q = list(set(question_list_idnex))\n", " #print(index_q)\n", " return index_q\n", "\n", "#找到问题与问题集相似的所有问题\n", "def get_total_q(index_q):\n", " import json\n", " with open('q_list.json','r')as file:\n", " tem_qlist=file.read()\n", " qlist=json.loads(tem_qlist)\n", " #print(qlist.shape)\n", " q_total_list = []\n", " for q in index_q:\n", " q_total_list.append(qlist[q])\n", " #print(q_total_list)\n", " return q_total_list\n", "\n", " \n", "#找出前k个余弦相似度最高的问题\n", "def get_cos_list(query,q_total_list,k):\n", " import heapq\n", " from sklearn.feature_extraction.text import TfidfVectorizer\n", " from sklearn.metrics.pairwise import cosine_similarity\n", " vectorizer = TfidfVectorizer()\n", " #query_list=query.strip().split()\n", " q_list_tfidf=vectorizer.fit_transform(q_total_list)\n", " sent_tfidf=vectorizer.transform([query])\n", " all_result=cosine_similarity(sent_tfidf,q_list_tfidf)\n", " #排序并记录最相似问题的索引\n", " cos_list=list(all_result[0])\n", " max_index =map(cos_list.index, heapq.nlargest(k, cos_list)) \n", " return 
max_index\n", "#找出最相似的问题 \n", "def get_top_answer(max_index,q_total_clean_list):\n", " import json\n", " with open('a_list.json', 'r') as file:\n", " answers = file.read()\n", " alist = json.loads(answers)\n", " #由于记录最相似词在问题列表中的位置的索引表和问题的句子向量一一对应,故通过后者在列表中位置问题qlist的索引\n", " top_index=[]\n", " for i in max_index:\n", " if i <len(q_total_clean_list):\n", " top_index.append(q_total_clean_list[i])\n", " \n", " top_answer=[] \n", " for i in top_index:\n", " top_answer.append([alist[i]])\n", " return top_answer\n", " \n", "#合并所有流程\n", "def get_top_results_tfidf(query):\n", " import time\n", " # 找出与问题与问题集中相似的问题\n", " filtered_words,filtered_sent=q_movestopwords(query)\n", " simi_list=get_related_words(filtered_words)\n", " q_total_index=get_inverted_idx(simi_list)\n", " q_total_list=get_total_q(q_total_index) \n", " \n", " #取得新问题对相关问题的相似度\n", " start_time=time.time()\n", " max_index = get_cos_list(filtered_sent,q_total_list,5)\n", " end_time=time.time()\n", " print(end_time-start_time)\n", " #取得最接近的答案list\n", " top_answer=get_top_answer(max_index,q_total_index)\n", " return top_answer\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#利用前面定义好的将文档转为glove向量的函数,query的glove表示单独处理\n", "def get_w2v_list(filtered_sent,q_total_list,k):\n", " import heapq\n", " import numpy as np\n", " q_list_w2v=get_glove_vec(q_total_list)\n", " with open(\"glove.json\",'r')as file:\n", " glove=file.read()\n", " glove_dic=json.loads(glove)\n", " #获取query的glove向量表示\n", " query=filtered_sent.strip().split()\n", " query_len=len(query)\n", " query_vec=[]\n", " for word in query:\n", " if word in glove_dic: \n", " query_vec.append(glove_dic[word])\n", " final_vec=np.sum(np.array(query_vec))/query_len \n", " all_result=cosine_similarity(final_vec,q_list_w2v)\n", " #排序并记录最相似问题的索引\n", " cos_list=list(all_result[0])\n", " max_index =map(cos_list.index, heapq.nlargest(k, cos_list)) \n", " return max_index\n", "#复用已有函数,只改变获取文档向量表示的函数\n", "def get_top_results_w2v(query):\n", " import time\n", " # 找出与问题与问题集中相似的问题\n", " filtered_words,filtered_sent=q_movestopwords(query)\n", " simi_list=get_related_words(filtered_words)\n", " q_total_index=get_inverted_idx(simi_list)\n", " q_total_list=get_total_q(q_total_index) \n", " \n", " #取得新问题对相关问题的相似度\n", " start_time=time.time()\n", " max_index = get_w2v_list(filtered_sent,q_total_list,5)\n", " end_time=time.time()\n", " print(end_time-start_time)\n", " #取得最接近的答案list\n", " top_answer=get_top_answer(max_index,q_total_index)\n", " return top_answer\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#利用前面定义好的将文档转为bert向量的函数,query的glove表示单独处理\n", "def get_bert_list(filtered_sent,q_total_list,k):\n", " import heapq\n", " q_list_bert=get_bert_vec(q_total_list)\n", " import numpy as np\n", " from bert_embedding import BertEmbedding\n", " bert_embedding = BertEmbedding(model='bert_12_768_12', dataset_name='book_corpus_wiki_en_cased')\n", " \n", " query=filtered_sent.strip().split()\n", " result = bert_embedding(query)\n", " item=result[0]\n", " query_bert=np.sum(np.array(item[1],axis=0))/len(item[0])\n", " all_result=cosine_similarity(query_bert,q_list_bert)\n", " #排序并记录最相似问题的索引\n", " cos_list=list(all_result[0])\n", " max_index =map(cos_list.index, heapq.nlargest(k, cos_list)) \n", " return max_index\n", "#复用已有函数,只改变获取文档向量表示的函数\n", "def get_top_results_bert(query):\n", " import time\n", " # 找出与问题与问题集中相似的问题\n", " filtered_words,filtered_sent=q_movestopwords(query)\n", " 
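# 先用倒排表 + related words 召回候选问题,再用 BERT 句向量做精排(整体流程与 tfidf / w2v 版本一致)\n", "    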
simi_list=get_related_words(filtered_words)\n", " q_total_index=get_inverted_idx(simi_list)\n", " q_total_list=get_total_q(q_total_index) \n", " \n", " #取得新问题对相关问题的相似度\n", " start_time=time.time()\n", " max_index = get_bert_list(filtered_sent,q_total_list,5)\n", " end_time=time.time()\n", " print(end_time-start_time)\n", " #取得最接近的答案list\n", " top_answer=get_top_answer(max_index,q_total_index)\n", " return top_answer\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO: 编写几个测试用例,并输出结果\n", "\n", "test_query1 = \"when did Beyonce start singing\"\n", "test_query2 = \"what is machine learning\"\n", "\n", "print (get_top_results_tfidf(test_query1))\n", "print (get_top_results_w2v(test_query1))\n", "print (get_top_results_bert(test_query1))\n", "\n", "print (get_top_results_tfidf(test_query2))\n", "print (get_top_results_w2v(test_query2))\n", "print (get_top_results_bert(test_query2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4. 拼写纠错\n", "其实用户在输入问题的时候,不能期待他一定会输入正确,有可能输入的单词的拼写错误的。这个时候我们需要后台及时捕获拼写错误,并进行纠正,然后再通过修正之后的结果再跟库里的问题做匹配。这里我们需要实现一个简单的拼写纠错的代码,然后自动去修复错误的单词。\n", "\n", "这里使用的拼写纠错方法是课程里讲过的方法,就是使用noisy channel model。 我们回想一下它的表示:\n", "\n", "$c^* = \\text{argmax}_{c\\in candidates} ~~p(c|s) = \\text{argmax}_{c\\in candidates} ~~p(s|c)p(c)$\n", "\n", "这里的```candidates```指的是针对于错误的单词的候选集,这部分我们可以假定是通过edit_distance来获取的(比如生成跟当前的词距离为1/2的所有的valid 单词。 valid单词可以定义为存在词典里的单词。 ```c```代表的是正确的单词, ```s```代表的是用户错误拼写的单词。 所以我们的目的是要寻找出在``candidates``里让上述概率最大的正确写法``c``。 \n", "\n", "$p(s|c)$,这个概率我们可以通过历史数据来获得,也就是对于一个正确的单词$c$, 有百分之多少人把它写成了错误的形式1,形式2... 这部分的数据可以从``spell_errors.txt``里面找得到。但在这个文件里,我们并没有标记这个概率,所以可以使用uniform probability来表示。这个也叫做channel probability。\n", "\n", "$p(c)$,这一项代表的是语言模型,也就是假如我们把错误的$s$,改造成了$c$, 把它加入到当前的语句之后有多通顺?在本次项目里我们使用bigram来评估这个概率。 举个例子: 假如有两个候选 $c_1, c_2$, 然后我们希望分别计算出这个语言模型的概率。 由于我们使用的是``bigram``, 我们需要计算出两个概率,分别是当前词前面和后面词的``bigram``概率。 用一个例子来表示:\n", "\n", "给定: ``We are go to school tomorrow``, 对于这句话我们希望把中间的``go``替换成正确的形式,假如候选集里有个,分别是``going``, ``went``, 这时候我们分别对这俩计算如下的概率:\n", "$p(going|are)p(to|going)$和 $p(went|are)p(to|went)$, 然后把这个概率当做是$p(c)$的概率。 然后再跟``channel probability``结合给出最终的概率大小。\n", "\n", "那这里的$p(are|going)$这些bigram概率又如何计算呢?答案是训练一个语言模型! 但训练一个语言模型需要一些文本数据,这个数据怎么找? 
在这次项目作业里我们会用到``nltk``自带的``reuters``的文本类数据来训练一个语言模型。当然,如果你有资源你也可以尝试其他更大的数据。最终目的就是计算出``bigram``概率。 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.1 训练一个语言模型\n", "在这里,我们使用``nltk``自带的``reuters``数据来训练一个语言模型。 使用``add-one smoothing``" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nltk.corpus import reuters\n", "\n", "# 读取语料库的数据\n", "categories = reuters.categories()\n", "corpus = reuters.sents(categories=categories)\n", "\n", "# 文本整理成list和把所有不重复单词整理成list\n", "all_word_list = []\n", "all_word = []\n", "\n", "for file in reuters.fileids():\n", " file_word = reuters.words(file)\n", " all_word += list(file_word)\n", " all_word_list += [list(file_word)]\n", "# 去除重复单词\n", "total_word = list(set(all_word))\n", "\n", "\n", "import json\n", "# total_word和All_word资料整理成json\n", "file_name_1 = 'all_word_list.json'\n", "file_name_2 = 'total_word.json'\n", "file_name_3 = 'all_word.json'\n", "\n", "with open(file_name_1,'w') as file_object:\n", " json.dump(all_word_list, file_object)\n", "with open(file_name_2,'w') as file_object:\n", " json.dump(total_word, file_object)\n", "with open(file_name_3,'w') as file_object:\n", " json.dump(all_word, file_object) \n", "#计算bigram的几率值\n", "\n", "# 循环所有的语料库并构建bigram probability. bigram[word1][word2]: 在word1出现的情况下下一个是word2的概率。 \n", "\n", "\n", "def get_bigram_pro(word1,word2,all_word_list = None,total_word = None):\n", " if all_word_list == None:\n", " import json\n", " # 读取所有问题\n", " with open('all_word_list.json','r') as file:\n", " all_word_list = json.load(file)\n", " if total_word == None:\n", " import json\n", " # 读取所有不重复单字\n", " with open('total_word.json','r') as file:\n", " total_word = json.load(file)\n", " # 所有文檔内文字列表\n", " from nltk.util import ngrams\n", " text_bigrams = [ngrams(sent, 2) for sent in all_word_list]\n", " text_unigrams = [ngrams(sent, 1) for sent in all_word_list]\n", " \n", " # 计算数量\n", " from nltk.lm import NgramCounter\n", " ngram_counts = NgramCounter(text_bigrams + text_unigrams) \n", " \n", " #计算几率用add-one smoothing\n", " word_count = ngram_counts[word1]\n", " join_count = ngram_counts[[word1]][word2]\n", " total_word_length = len(total_word)\n", " \n", " bigram_probability = (join_count + 1) / ( word_count + total_word_length)\n", " return bigram_probability\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.2 构建Channel Probs\n", "基于``spell_errors.txt``文件构建``channel probability``, 其中$channel[c][s]$表示正确的单词$c$被写错成$s$的概率。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "channel = {}\n", "\n", "spell_error_dict = {}\n", "\n", "for line in open('spell-errors.txt'):\n", " item = line.split(\":\")\n", " word = item[0].strip()\n", " spell_error_list = [word.strip( )for word in item[1].strip().split(\",\")]\n", " spell_error_dict[word] = spell_error_list\n", "# print_format(\"spell_error_dict\", spell_error_dict)\n", "\n", "for key in spell_error_dict:\n", " if key not in channel:\n", " channel[key] = {}\n", " for value in spell_error_dict[key]:\n", " channel[key][value] = 1 / len(spell_error_dict[key])\n", "print(channel['raining']['rainning'])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.3 根据错别字生成所有候选集合\n", "给定一个错误的单词,首先生成跟这个单词距离为1或者2的所有的候选集合。 这部分的代码我们在课程上也讲过,可以参考一下。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ " #找出所有英文字\n", "def word_lower(text):\n", " import re\n", " return 
re.findall('[a-z]+',text.lower())\n", "\n", "def known(words):\n", " # 打开单词列表\n", " import json \n", " with open('total_word.json','r') as file:\n", " words_N = json.load(file) \n", " \n", " return set(w for w in words if w in words_N)\n", "#获取编辑距离为1的单词\n", "def edits1(word):\n", " # 替换的英文字\n", " alphabet = \"abcdefghijklmnopqrstuvwxyz\"\n", " \n", " n = len(word)\n", " #删除\n", " s1 = [word[0:i]+word[i+1:] for i in range(n)]\n", " #调换相邻的两个字母\n", " s2 = [word[0:i]+word[i+1]+word[i]+word[i+2:] for i in range(n-1)]\n", " #replace\n", " s3 = [word[0:i]+c+word[i+1:] for i in range(n) for c in alphabet]\n", " #插入\n", " s4 = [word[0:i]+c+word[i:] for i in range(n+1) for c in alphabet]\n", " edits1_words = set(s1+s2+s3+s4)\n", " edits1_words.remove(word)\n", " edits1_words = known(edits1_words)\n", " return edits1_words\n", "\n", "#2\n", "def edits2(word):\n", " edits2_words = set(e2 for e1 in edits1(word) for e2 in edits1(e1))\n", " #edits2_words.remove(word)\n", " edits2_words = known(edits2_words)\n", " return edits2_words\n", "\n", "def generate_candidates(word):\n", " #大写变小写\n", " word_lower_clean = word_lower(word)\n", " word_clean = word_lower_clean[0]\n", " \n", " # 打开单词列表\n", " import json \n", " with open('total_word.json','r') as file:\n", " words_N = json.load(file)\n", " \n", " # 纠错\n", " if word_clean not in words_N:\n", " candidates = edits1(word_clean) or edits2(word_clean)\n", " return candidates\n", " else:\n", " return None\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.4 给定一个输入,如果有错误需要纠正\n", "\n", "给定一个输入``query``, 如果这里有些单词是拼错的,就需要把它纠正过来。这部分的实现可以简单一点: 对于``query``分词,然后把分词后的每一个单词在词库里面搜一下,假设搜不到的话可以认为是拼写错误的! 人如果拼写错误了再通过``channel``和``bigram``来计算最适合的候选。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_bigram_chnnel_pro(tokens , index, all_word_list, total_word):\n", " # 前后文的单字\n", " front_word = tokens[index-1]\n", " now_word = tokens[index]\n", " after_word = tokens[index+1]\n", " \n", " #前后文的几率\n", " front_pro = get_bigram_pro(now_word , front_word , all_word_list , total_word)\n", " after_pro = get_bigram_pro(now_word , after_word , all_word_list , total_word)\n", " \n", " #计算bigram几率\n", " bigram_pro = front_pro * after_pro\n", " \n", " return bigram_pro\n", "def spell_corector(line):\n", " # 1. 首先做分词,然后把``line``表示成``tokens``\n", " # 2. 
循环每一token, 然后判断是否存在词库里。如果不存在就意味着是拼写错误的,需要修正。 \n", " # 修正的过程就使用上述提到的``noisy channel model``, 然后从而找出最好的修正之后的结果。\n", " \n", " # 句子先tokens,数据清洗\n", " line_clean = line.replace('?','')\n", " tokens = line_clean.split()\n", " \n", " \n", " #读取要使用的单字清单\n", " import json\n", " # 读取所有不重复单字\n", " with open('all_word_list.json','r') as file:\n", " all_word_list = json.load(file)\n", " # 读取还有所有问题\n", " with open('total_word.json','r') as file:\n", " total_word = json.load(file)\n", " \n", " # 逐个单字检查\n", " t = 0\n", " sentence = []\n", " \n", " while t < len(tokens):\n", " if tokens[t] not in total_word:\n", "\n", " # 找出备选相似词\n", " simility_list = list(generate_candidates(tokens[t]))\n", " \n", " # 列出备选项的几率列表,并找出几率最高的单词\n", " simility_pro_list = [get_bigram_chnnel_pro(tokens , t , all_word_list, total_word) for i in simility_list]\n", " \n", " # 避免专有名词影响,若找不到相似词,当专有名词\n", " if simility_pro_list == []:\n", " sentence += [tokens[t]]\n", " else:\n", " word_index = simility_pro_list.index(max(simility_pro_list))\n", " correct_word = simility_list[word_index]\n", " \n", " # 组成list\n", " sentence += [correct_word]\n", " \n", " else:\n", " sentence += [tokens[t]]\n", " \n", " t += 1\n", "\n", " # 将单词组合成句子\n", " newline = ' '.join(sentence)\n", " \n", " \n", " return newline " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.5 基于拼写纠错算法,实现用户输入自动矫正\n", "首先有了用户的输入``query``, 然后做必要的处理把句子转换成tokens的形状,然后对于每一个token比较是否是valid, 如果不是的话就进行下面的修正过程。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_query1 = \"when did beyonce staart singing\" # 拼写错误的\n", "test_query2 = \"what is machin learning\" # 拼写错误的\n", "\n", "test_query1 = spell_corector(test_query1)\n", "test_query2 = spell_corector(test_query2)\n", "\n", "print (get_top_results_tfidf(test_query1))\n", "print (get_top_results_w2v(test_query1))\n", "print (get_top_results_bert(test_query1))\n", "\n", "print (get_top_results_tfidf(test_query2))\n", "print (get_top_results_w2v(test_query2))\n", "print (get_top_results_bert(test_query2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 附录 \n", "在本次项目中我们实现了一个简易的问答系统。基于这个项目,我们其实可以有很多方面的延伸。\n", "- 在这里,我们使用文本向量之间的余弦相似度作为了一个标准。但实际上,我们也可以基于基于包含关键词的情况来给一定的权重。比如一个单词跟related word有多相似,越相似就意味着相似度更高,权重也会更大。 \n", "- 另外 ,除了根据词向量去寻找``related words``也可以提前定义好同义词库,但这个需要大量的人力成本。 \n", "- 在这里,我们直接返回了问题的答案。 但在理想情况下,我们还是希望通过问题的种类来返回最合适的答案。 比如一个用户问:“明天北京的天气是多少?”, 那这个问题的答案其实是一个具体的温度(其实也叫做实体),所以需要在答案的基础上做进一步的抽取。这项技术其实是跟信息抽取相关的。 \n", "- 对于词向量,我们只是使用了``average pooling``, 除了average pooling,我们也还有其他的经典的方法直接去学出一个句子的向量。\n", "- 短文的相似度分析一直是业界和学术界一个具有挑战性的问题。在这里我们使用尽可能多的同义词来提升系统的性能。但除了这种简单的方法,可以尝试其他的方法比如WMD,或者适当结合parsing相关的知识点。 " ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "好了,祝你好运! " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" } }, "nbformat": 4, "nbformat_minor": 2 }