{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 搭建一个简单的问答系统 (Building a Simple QA System)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "本次项目的目标是搭建一个基于检索式的简易的问答系统,这是一个最经典的方法也是最有效的方法。 \n", "\n", "```不要单独创建一个文件,所有的都在这里面编写,不要试图改已经有的函数名字 (但可以根据需求自己定义新的函数)```\n", "\n", "```预估完成时间```: 5-10小时" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 检索式的问答系统\n", "问答系统所需要的数据已经提供,对于每一个问题都可以找得到相应的答案,所以可以理解为每一个样本数据是 ``<问题、答案>``。 那系统的核心是当用户输入一个问题的时候,首先要找到跟这个问题最相近的已经存储在库里的问题,然后直接返回相应的答案即可(但实际上也可以抽取其中的实体或者关键词)。 举一个简单的例子:\n", "\n", "假设我们的库里面已有存在以下几个<问题,答案>:\n", "- <\"贪心学院主要做什么方面的业务?”, “他们主要做人工智能方面的教育”>\n", "- <“国内有哪些做人工智能教育的公司?”, “贪心学院”>\n", "- <\"人工智能和机器学习的关系什么?\", \"其实机器学习是人工智能的一个范畴,很多人工智能的应用要基于机器学习的技术\">\n", "- <\"人工智能最核心的语言是什么?\", ”Python“>\n", "- .....\n", "\n", "假设一个用户往系统中输入了问题 “贪心学院是做什么的?”, 那这时候系统先去匹配最相近的“已经存在库里的”问题。 那在这里很显然是 “贪心学院是做什么的”和“贪心学院主要做什么方面的业务?”是最相近的。 所以当我们定位到这个问题之后,直接返回它的答案 “他们主要做人工智能方面的教育”就可以了。 所以这里的核心问题可以归结为计算两个问句(query)之间的相似度。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 项目中涉及到的任务描述\n", "问答系统看似简单,但其中涉及到的内容比较多。 在这里先做一个简单的解释,总体来讲,我们即将要搭建的模块包括:\n", "\n", "- 文本的读取: 需要从相应的文件里读取```(问题,答案)```\n", "- 文本预处理: 清洗文本很重要,需要涉及到```停用词过滤```等工作\n", "- 文本的表示: 如果表示一个句子是非常核心的问题,这里会涉及到```tf-idf```, ```Glove```以及```BERT Embedding```\n", "- 文本相似度匹配: 在基于检索式系统中一个核心的部分是计算文本之间的```相似度```,从而选择相似度最高的问题然后返回这些问题的答案\n", "- 倒排表: 为了加速搜索速度,我们需要设计```倒排表```来存储每一个词与出现的文本\n", "- 词义匹配:直接使用倒排表会忽略到一些意思上相近但不完全一样的单词,我们需要做这部分的处理。我们需要提前构建好```相似的单词```然后搜索阶段使用\n", "- 拼写纠错:我们不能保证用户输入的准确,所以第一步需要做用户输入检查,如果发现用户拼错了,我们需要及时在后台改正,然后按照修改后的在库里面搜索\n", "- 文档的排序: 最后返回结果的排序根据文档之间```余弦相似度```有关,同时也跟倒排表中匹配的单词有关\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 项目中需要的数据:\n", "1. ```dev-v2.0.json```: 这个数据包含了问题和答案的pair, 但是以JSON格式存在,需要编写parser来提取出里面的问题和答案。 \n", "2. ```glove.6B```: 这个文件需要从网上下载,下载地址为:https://nlp.stanford.edu/projects/glove/, 请使用d=200的词向量\n", "3. ```spell-errors.txt``` 这个文件主要用来编写拼写纠错模块。 文件中第一列为正确的单词,之后列出来的单词都是常见的错误写法。 但这里需要注意的一点是我们没有给出他们之间的概率,也就是p(错误|正确),所以我们可以认为每一种类型的错误都是```同等概率```\n", "4. ```vocab.txt``` 这里列了几万个英文常见的单词,可以用这个词库来验证是否有些单词被拼错\n", "5. 
```testdata.txt``` 这里搜集了一些测试数据,可以用来测试自己的spell corrector。这个文件只是用来测试自己的程序。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "在本次项目中,你将会用到以下几个工具:\n", "- ```sklearn```。具体安装请见:http://scikit-learn.org/stable/install.html sklearn包含了各类机器学习算法和数据处理工具,包括本项目需要使用的词袋模型,均可以在sklearn工具包中找得到。 \n", "- ```jieba```,用来做分词。具体使用方法请见 https://github.com/fxsjy/jieba\n", "- ```bert embedding```: https://github.com/imgarylai/bert-embedding\n", "- ```nltk```:https://www.nltk.org/index.html" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 第一部分:对于训练数据的处理:读取文件和预处理" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- ```文本的读取```: 需要从文本中读取数据,此处需要读取的文件是```dev-v2.0.json```,并把读取的文件存入一个列表里(list)\n", "- ```文本预处理```: 对于问题本身需要做一些停用词过滤等文本方面的处理\n", "- ```可视化分析```: 对于给定的样本数据,做一些可视化分析来更好地理解数据" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.1节: 文本的读取\n", "把给定的文本数据读入到```qlist```和```alist```当中,这两个分别是列表,其中```qlist```是问题的列表,```alist```是对应的答案列表" ] }, { "cell_type": "code", "execution_count": 139, "metadata": {}, "outputs": [], "source": [ "# json_data['data'][0]['paragraphs'][0]['qas']" ] }, { "cell_type": "code", "execution_count": 152, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(86821, 86821)" ] }, "execution_count": 152, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import json\n", "def read_corpus():\n", " \"\"\"\n", " 读取给定的语料库,并把问题列表和答案列表分别写入到 qlist, alist 里面。 在此过程中,不用对字符换做任何的处理(这部分需要在 Part 2.3里处理)\n", " qlist = [\"问题1\", “问题2”, “问题3” ....]\n", " alist = [\"答案1\", \"答案2\", \"答案3\" ....]\n", " 务必要让每一个问题和答案对应起来(下标位置一致)\n", " \"\"\"\n", " # TODO 需要完成的代码部分 ...\n", " \n", " # 读取文件\n", " with open('train-v2.0.json', 'r') as f:\n", " json_data = json.load(f)\n", " \n", " # 解析json\n", " qlist = []\n", " alist = []\n", " datas = json_data['data']\n", " for data in datas:\n", " paragraphs = data['paragraphs']\n", " for para in paragraphs:\n", " qas = para['qas']\n", " for qa in qas:\n", " ques = qa['question']\n", " ans = qa['answers']\n", " if ans:\n", " an = [i['text'] for i in ans]\n", " qlist.append(ques)\n", " alist.append(an)\n", " \n", " assert len(qlist) == len(alist) # 确保长度一样\n", " return qlist, alist\n", "\n", "qlist, alist = read_corpus()\n", "len(qlist), len(alist)" ] }, { "cell_type": "code", "execution_count": 153, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "('Julian Fontana tried to find his way where before moving to Paris?',\n", " ['England'])" ] }, "execution_count": 153, "metadata": {}, "output_type": "execute_result" } ], "source": [ "num = 1001\n", "qlist[num], alist[num]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.2 理解数据(可视化分析/统计信息)\n", "对数据的理解是任何AI工作的第一步, 需要对数据有个比较直观的认识。在这里,简单地统计一下:\n", "\n", "- 在```qlist```出现的总单词个数\n", "- 按照词频画一个```histogram``` plot" ] }, { "cell_type": "code", "execution_count": 154, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "46386\n" ] } ], "source": [ "# TODO: 统计一下在qlist中总共出现了多少个单词? 
总共出现了多少个不同的单词(unique word)?\n", "# 这里需要做简单的分词,对于英文我们根据空格来分词即可,其他过滤暂不考虑(只需分词)\n", "from collections import Counter\n", "import re\n", "word_list = []\n", "for ques in qlist:\n", " word_list.extend(re.sub('[!.?]+', '', ques).lower().split())\n", "\n", "# 统计\n", "word_counter = Counter(word_list)\n", "\n", "word_total = len(word_counter)\n", "print (word_total)" ] }, { "cell_type": "code", "execution_count": 155, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "# 单词词频\n", "cts = [i for i in word_counter.values()]\n", "# 统计排序\n", "cts_counter = Counter(cts)\n", "sorted_cts_counter = sorted(cts_counter.items(), key=lambda x: x[0])" ] }, { "cell_type": "code", "execution_count": 156, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYQAAAD4CAYAAADsKpHdAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAARQ0lEQVR4nO3df6zd9V3H8efLFhiOwfhRSG0727lmsRDHRlNZMAuuKt2mggkkJVH6B6YGWdyi0VCXOP2jyTBxGKKgOJAytwGyTZpluJEysywhwGVj42flbnSjttJOkBETcIW3f5xPs3Nuv21vb2977/H7fCTfnO95n+/ne94n3PK63+/3c743VYUkST811w1IkuYHA0GSBBgIkqTGQJAkAQaCJKlZONcNzNRZZ51Vy5cvn+s2JGmsPProoz+sqkVdr41tICxfvpyJiYm5bkOSxkqS7x/sNU8ZSZIAA0GS1BgIkiTAQJAkNQaCJAkwECRJjYEgSQJ6GAiP7HiRT351O/+77425bkWS5pXeBcI3v/8SNz4wyb43DARJGta7QJAkdTMQJEmAgSBJagwESRJgIEiSmt4GQtVcdyBJ80vvAiGZ6w4kaX7qXSBIkroZCJIkwECQJDUGgiQJMBAkSU1vA8FZp5I0qneBEJx3KkldehcIkqRuBoIkCTAQJEmNgSBJAnocCOXd7SRpRO8CwZvbSVK33gWCJKmbgSBJAqYRCEmWJflakqeTPJnkI61+RpL7kzzbHk8fGrMpyWSS7UkuGapfkOTx9tqNyeAETpKTktzV6g8lWX4MPqsk6RCmc4SwD/ijqvp54ELg2iSrgOuAbVW1EtjWntNeWw+cC6wDbkqyoO3rZmAjsLIt61r9auClqnoHcANw/Sx8NknSEThsIFTV7qr6Zlt/BXgaWAJcCmxpm20BLmvrlwJ3VtVrVfUcMAmsSbIYOLWqHqzBFJ87pozZv697gLX7jx4kScfHEV1DaKdy3g08BJxTVbthEBrA2W2zJcDzQ8N2ttqStj61PjKmqvYBLwNndrz/xiQTSSb27t17JK0fwEmnkjRq2oGQ5BTg88BHq+pHh9q0o1aHqB9qzGih6paqWl1VqxctWnS4liVJR2BagZDkBAZh8Jmq+kIrv9BOA9Ee97T6TmDZ0PClwK5WX9pRHxmTZCFwGvDikX4YSdLMTWeWUYBbgaer6pNDL20FNrT1DcC9Q/X1bebQCgYXjx9up5VeSXJh2+dVU8bs39flwAPlV4kl6bhaOI1tLgJ+B3g8yWOt9qfAJ4C7k1wN/AC4AqCqnkxyN/AUgxlK11bV623cNcDtwMnAfW2BQeB8OskkgyOD9Uf3sSRJR+qwgVBV36D7HD/A2oOM2Qxs7qhPAOd11F+lBYokaW74TWVJEtDjQPAKhSSN6l0g+H03SerWu0CQJHUzECRJgIEgSWoMBEkSYCBIkpr+BoLTTiVpRO8CwUmnktStd4EgSepmIEiSAANBktQYCJIkwECQJDW9DYRy3qkkjehdIHizU0nq1rtAkCR1MxAkSYCBIElqDARJEmAgSJKa3gZCOetUkkb0LhCcdSpJ3XoXCJKkbgaCJAkwECRJjYEgSQJ6HAhOMpKkUb0LhHh3O0nq1LtAkCR1MxAkSYCBIElqDARJEjCNQEhyW5I9SZ4Yqv15kv9I8lhbPjj02qYkk0m2J7lkqH5BksfbazemXd1NclKSu1r9oSTLZ/kzSpKmYTpHCLcD6zrqN1TV+W35MkCSVcB64Nw25qYkC9r2NwMbgZVt2b/Pq4GXquodwA3A9TP8LEekvLudJI04bCBU1deBF6e5v0uBO6vqtap6DpgE1iRZDJxaVQ/W4P/EdwCXDY3Z0tbvAdbmGM4NddapJHU7mmsIH07ynXZK6fRWWwI8P7TNzlZb0tan1kfGVNU+4GXgzK43TLIxyUSSib179x5F65KkqWYaCDcDPwecD+wG/qrVu37/rkPUDzXmwGLVLVW1uqpWL1q06IgaliQd2owCoapeqKrXq+oN4B+ANe2lncCyoU2XArtafWlHfWRMkoXAaUz/FJUkaZbMKBDaNYH9fgvYPwNpK7C+zRxaweDi8cNVtRt4JcmF7frAVcC9Q2M2tPXLgQfKK76SdNwtPNwGST4HXAyclWQn8HHg4iTnMzi1swP4PYCqejLJ3cBTwD7g2qp6ve3qGgYzlk4G7msLwK3Ap5NMMjgyWD8Ln0uSdIQOGwhVdWVH+dZDbL8Z2NxRnwDO66i/ClxxuD5mm4cgkjSqd99UdtapJHXrXSBIkroZCJIkwECQJDUGgiQJMBAkSU1vA8GvvknSqP4Fgrc7laRO/QsESVInA0GSBBgIkqTGQJAkAQaCJKnpbSCU9zuVpBG9CwQnnUpSt94FgiSpm4EgSQIMBElSYyBIkoA+B4KTjCRpRO8CwXvbSVK33gWCJKmbgSBJAgwESVJjIEiSAANBktT0NhCcdSpJo3oXCPH2dpLUqXeBIEnqZiBIkgADQZLUGAiSJMBAkCQ1vQ2Ect6pJI04bCAkuS3JniRPDNXOSHJ/kmfb4+lDr21KMplke5JLhuoXJHm8vXZjMrjvaJKTktzV6g8lWT7Ln3HK5zmWe5ek8TWdI4TbgXVTatcB26pqJbCtPSfJKmA9cG4bc1OSBW3MzcBGYGVb9u/zauClqnoHcANw/Uw/jCRp5g4bCFX1deDFKeVLgS1tfQtw2VD9zqp6raqeAyaBNUkWA6dW1YNVVcAdU8bs39c9wNr9Rw+SpONnptcQzqmq3QDt8exWXwI8P
7TdzlZb0tan1kfGVNU+4GXgzK43TbIxyUSSib17986wdUlSl9m+qNz1m30don6oMQcWq26pqtVVtXrRokUzbFGS1GWmgfBCOw1Ee9zT6juBZUPbLQV2tfrSjvrImCQLgdM48BSVJOkYm2kgbAU2tPUNwL1D9fVt5tAKBhePH26nlV5JcmG7PnDVlDH793U58EC7znBMlfc7laQRCw+3QZLPARcDZyXZCXwc+ARwd5KrgR8AVwBU1ZNJ7gaeAvYB11bV621X1zCYsXQycF9bAG4FPp1kksGRwfpZ+WQH+zzHcueSNMYOGwhVdeVBXlp7kO03A5s76hPAeR31V2mBIkmaO739prIkaZSBIEkCDARJUmMgSJKAHgeCdzuVpFG9CwTvkiRJ3XoXCJKkbgaCJAkwECRJjYEgSQJ6HAhOMpKkUb0LhHh7O0nq1LtAkCR1MxAkSYCBIElqDARJEmAgSJKa3gbCcfizzZI0VvoXCM46laRO/QsESVInA0GSBBgIkqTGQJAkAQaCJKnpbSA461SSRvUuEJx1KkndehcIkqRuBoIkCTAQJEmNgSBJAgwESVJjIEiSgB4GQuLEU0nq0rtAkCR1O6pASLIjyeNJHksy0WpnJLk/ybPt8fSh7TclmUyyPcklQ/UL2n4mk9wYf42XpONuNo4Qfrmqzq+q1e35dcC2qloJbGvPSbIKWA+cC6wDbkqyoI25GdgIrGzLulnoS5J0BI7FKaNLgS1tfQtw2VD9zqp6raqeAyaBNUkWA6dW1YM1+LuWdwyNkSQdJ0cbCAV8NcmjSTa22jlVtRugPZ7d6kuA54fG7my1JW19av0ASTYmmUgysXfv3qNsXZI0bOFRjr+oqnYlORu4P8kzh9i267pAHaJ+YLHqFuAWgNWrVx/V/Uq926kkjTqqI4Sq2tUe9wBfBNYAL7TTQLTHPW3zncCyoeFLgV2tvrSjfkx4tVqSus04EJK8Oclb9q8DvwY8AWwFNrTNNgD3tvWtwPokJyVZweDi8cPttNIrSS5ss4uuGhojSTpOjuaU0TnAF9sM0YXAZ6vqX5M8Atyd5GrgB8AVAFX1ZJK7gaeAfcC1VfV629c1wO3AycB9bZEkHUczDoSq+h7wro76fwFrDzJmM7C5oz4BnDfTXiRJR89vKkuSAANBktT0NhCqe2arJPVW7wLBuyRJUrfeBYIkqZuBIEkCDARJUmMgSJKAHgeCN7eTpFG9CwRnGUlSt94FgiSpm4EgSQIMBElSYyBIkgADQZLU9DYQnHUqSaN6FwjxrypLUqfeBYIkqZuBIEkCDARJUmMgSJIAA0GS1PQ2EMrbnUrSiN4Fgnc7laRuvQsESVI3A0GSBBgIkqTGQJAkAQaCJKnpbSA46VSSRvU2ECRJowwESRJgIEiSGgNBkgTMo0BIsi7J9iSTSa6b634kqW/mRSAkWQD8LfABYBVwZZJVx+K9dr/8KgCv/vj1Y7F7SRpbC+e6gWYNMFlV3wNIcidwKfDUbL/RD195DYAP3fgNVp59ymzvXpKOuT9Yu5LfeNfPzPp+50sgLAGeH3q+E/jFqRsl2QhsBHjb2942ozf643Xv5FPfeA6AlecYCJLGz2knn3BM9jtfAqHrptQHfHesqm4BbgFYvXr1jL5bdtLCBez4xIdmMlSS/l+bF9cQGBwRLBt6vhTYNUe9SFIvzZdAeARYmWRFkhOB9cDWOe5JknplXpwyqqp9ST4MfAVYANxWVU/OcVuS1CvzIhAAqurLwJfnug9J6qv5cspIkjTHDARJEmAgSJIaA0GSBECqxvNvhyXZC3x/hsPPAn44i+0cb+Pc/zj3DvY/l8a5d5g//f9sVS3qemFsA+FoJJmoqtVz3cdMjXP/49w72P9cGufeYTz695SRJAkwECRJTV8D4Za5buAojXP/49w72P9cGufeYQz67+U1BEnSgfp6hCBJmsJAkCQBPQyEJOuSbE8ymeS6OezjtiR7kjwxVDsjyf1Jnm2Ppw+9tqn1vD3JJUP1C5I83l67MUla/aQkd7X6Q0mWz2Lvy5J8LcnTSZ5M8pEx6/9NSR5O8u3W/1+MU/9t/wuSfCvJl8aw9x3tfR9LMjFO/Sd5a5J7kjzTfv7fOy69T0tV9WZhcGvt7wJvB04Evg2smqNe3ge8B3hiqPaXwHVt/Trg+ra+qvV6ErCifYYF7bWHgfcy+Ktz9wEfaPXfB/6ura8H7prF3hcD72nrbwH+vfU4Lv0HOKWtnwA8BFw4Lv23ff4h8FngS+P0s9P2uQM4a0ptLPoHtgC/29ZPBN46Lr1P6/Mdzzeb66X9B/jK0PNNwKY57Gc5o4GwHVjc1hcD27v6ZPB3I97btnlmqH4l8PfD27T1hQy+IZlj9DnuBX51HPsHfhr4JoO/4T0W/TP4i4LbgPfzk0AYi97bPndwYCDM+/6BU4Hnpu5rHHqf7tK3U0ZLgOeHnu9stfninKraDdAez271g/W9pK1PrY+Mqap9wMvAmbPdcDukfTeD37LHpv92yuUxYA9wf1WNU/9/DfwJ8MZQbVx6h8HfS/9qkkeTbByj/t8O7AX+sZ2u+1SSN49J79PSt0BIR20c5t0erO9DfZ5j/lmTnAJ8HvhoVf3oUJsepJc567+qXq+q8xn8tr0myXmH2Hze9J/k14E9VfXodIccpI+5/Nm5qKreA3wAuDbJ+w6x7XzqfyGD07w3V9W7gf9hcIroYOZT79PSt0DYCSwber4U2DVHvXR5IcligPa4p9UP1vfOtj61PjImyULgNODF2Wo0yQkMwuAzVfWFcet/v6r6b+DfgHVj0v9FwG8m2QHcCbw/yT+NSe8AVNWu9rgH+CKwZkz63wnsbEeTAPcwCIhx6H1a+hYIjwArk6xIciKDizZb57inYVuBDW19A4Nz8/vr69sMhBXASuDhdnj6SpIL2yyFq6aM2b+vy4EHqp2YPFrtvW4Fnq6qT45h/4uSvLWtnwz8CvDMOPRfVZuqamlVLWfw8/tAVf32OPQOkOTNSd6yfx34NeCJcei/qv4TeD7JO1tpLfDUOPQ+bcfrYsV8WYAPMpgV813gY3PYx+eA3cCPGfxWcDWDc4XbgGfb4xlD23+s9bydNiOh1Vcz+Af1XeBv+Mm3z98E/DMwyWBGw9tnsfdfYnAY+x3gsbZ8cIz6/wXgW63/J4A/a/Wx6H/ovS/mJxeVx6J3Bufhv92WJ/f/Gxyj/s8HJtrPzr8Ap49L79NZvHWFJAno3ykjSdJBGAiSJMBAkCQ1BoIkCTAQJEmNgSBJAgwESVLzf22cm0m6AqupAAAAAElFTkSuQmCC\n", "text/plain": [ "<Figure size 432x288 with 1 Axes>" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# TODO: 统计一下qlist中出现1次,2次,3次... 出现的单词个数, 然后画一个plot. 
这里的x轴是单词出现的次数(1,2,3,..), y轴是单词个数。\n", "# 从左到右分别是 出现1次的单词数,出现2次的单词数,出现3次的单词数... \n", "\n", "import matplotlib.pyplot as plt\n", "# 单词词频\n", "cts = [i for i in word_counter.values()]\n", "# 统计排序\n", "cts_counter = Counter(cts)\n", "sorted_cts_counter = sorted(cts_counter.items(), key=lambda x: x[0])\n", "\n", "# 画图\n", "x = [i[0] for i in sorted_cts_counter]\n", "y = [i[1] for i in sorted_cts_counter]\n", "plt.plot(x, y)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": 157, "metadata": {}, "outputs": [], "source": [ "# TODO: 从上面的图中能观察到什么样的现象? 这样的一个图的形状跟一个非常著名的函数形状很类似,能所出此定理吗? \n", "# hint: [XXX]'s law\n", "# 现象: 出现频率较少的词语个数较多, 出现频率较多的词语, 其个数呈现出长尾分布\n", "# 定理: " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 1.3 文本预处理\n", "此部分需要做文本方面的处理。 以下是可以用到的一些方法:\n", "\n", "- 1. 停用词过滤 (去网上搜一下 \"english stop words list\",会出现很多包含停用词库的网页,或者直接使用NLTK自带的) \n", "- 2. 转换成lower_case: 这是一个基本的操作 \n", "- 3. 去掉一些无用的符号: 比如连续的感叹号!!!, 或者一些奇怪的单词。\n", "- 4. 去掉出现频率很低的词:比如出现次数少于10,20.... (想一下如何选择阈值)\n", "- 5. 对于数字的处理: 分词完只有有些单词可能就是数字比如44,415,把所有这些数字都看成是一个单词,这个新的单词我们可以定义为 \"#number\"\n", "- 6. lemmatization: 在这里不要使用stemming, 因为stemming的结果有可能不是valid word。\n" ] }, { "cell_type": "code", "execution_count": 158, "metadata": {}, "outputs": [], "source": [ "# TODO: 需要做文本方面的处理。 从上述几个常用的方法中选择合适的方法给qlist做预处理(不一定要按照上面的顺序,不一定要全部使用)\n", "import nltk\n", "from nltk.corpus import stopwords\n", "from nltk import word_tokenize, pos_tag\n", "from nltk.corpus import wordnet\n", "from nltk.stem import WordNetLemmatizer\n", "\n", "# nltk.download('stopwords')\n", "# nltk.download('punkt')\n", "# nltk.download('averaged_perceptron_tagger')\n", "stop_words = stopwords.words('english')\n", "wnl = WordNetLemmatizer()\n", "\n", "# 获取单词的词性\n", "def get_wordnet_pos(tag):\n", " if tag.startswith('J'):\n", " return wordnet.ADJ\n", " elif tag.startswith('V'):\n", " return wordnet.VERB\n", " elif tag.startswith('N'):\n", " return wordnet.NOUN\n", " elif tag.startswith('R'):\n", " return wordnet.ADV\n", " else:\n", " return wordnet.NOUN\n", "\n", "import unicodedata\n", "def strip_accents(string, accents=('COMBINING ACUTE ACCENT', 'COMBINING GRAVE ACCENT', 'COMBINING TILDE')):\n", " accents = set(map(unicodedata.lookup, accents))\n", " chars = [c for c in unicodedata.normalize('NFD', string) if c not in accents]\n", " return unicodedata.normalize('NFC', ''.join(chars))\n", "\n", "def qlist_preprocess(qlist):\n", " for i in range(len(qlist)):\n", "\n", " ques = re.sub('\\d+', '#number', qlist[i]) # 数字变成统一字符\n", " ques = re.sub(\"\\'.\", '', ques) # 's, 'm 等过滤\n", " ques = ques.replace('-', ' ')\n", "\n", " # ques = word_tokenize(re.sub('[!.?,]+', '', ques)) # 去除标点, 分词\n", " ques = re.sub('[!.?,\\\"]+', '', ques).split() # 去除标点, 分词\n", "\n", " ques = [i.lower() for i in ques] # 转小写\n", "\n", " # lemmatization\n", " tagged_sent = pos_tag(ques)\n", " ques = [wnl.lemmatize(tag[0], pos=get_wordnet_pos(tag[1])) for tag in tagged_sent]\n", "\n", " ques = [i for i in ques if i not in stop_words] # 去停用词\n", "\n", " # deaccent\n", " ques = [strip_accents(i) for i in ques]\n", "\n", " qlist[i] = ques\n", " return qlist\n", "\n", " # qlist = # 更新后的问题列表" ] }, { "cell_type": "code", "execution_count": 159, "metadata": {}, "outputs": [], "source": [ "qlist = qlist_preprocess(qlist)" ] }, { "cell_type": "code", "execution_count": 202, "metadata": {}, "outputs": [], "source": [ "# qlist" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 第二部分: 文本的表示\n", "当我们做完必要的文本处理之后就需要想办法表示文本了,这里有几种方式\n", "\n", "- 1. 
使用```tf-idf vector```\n", "- 2. 使用embedding技术如```word2vec```, ```bert embedding```等\n", "\n", "下面我们分别提取这三个特征来做对比。 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.1 使用tf-idf表示向量\n", "把```qlist```中的每一个问题的字符串转换成```tf-idf```向量, 转换之后的结果存储在```X```矩阵里。 ``X``的大小是: ``N* D``的矩阵。 这里``N``是问题的个数(样本个数),\n", "``D``是词典库的大小" ] }, { "cell_type": "code", "execution_count": 230, "metadata": {}, "outputs": [], "source": [ "# TODO \n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "vectorizer = TfidfVectorizer(use_idf=True, smooth_idf=True, norm=None) # 定义一个tf-idf的vectorizer\n", "X = vectorizer.fit_transform([' '.join(i) for i in qlist]) \n", "X_tfidf = X # 结果存放在X矩阵里" ] }, { "cell_type": "code", "execution_count": 231, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(86821, 30608)" ] }, "execution_count": 231, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X_tfidf.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.2 使用wordvec + average pooling\n", "词向量方面需要下载: https://nlp.stanford.edu/projects/glove/ (请下载``glove.6B.zip``),并使用``d=200``的词向量(200维)。国外网址如果很慢,可以在百度上搜索国内服务器上的。 每个词向量获取完之后,即可以得到一个句子的向量。 我们通过``average pooling``来实现句子的向量。 " ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "def get_word_dict(file):\n", " word_dict = dict()\n", " f = open('glove.6B.200d.txt', 'r')\n", " for line in f:\n", " items = line.strip().split()\n", " word = items[0]\n", " vec = np.array(items[1:]).astype('float')\n", " word_dict[word] = vec\n", " # break\n", " f.close()\n", " return word_dict" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "word_dict = get_word_dict('glove.6B.200d.txt')" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "def get_emb(qlist):\n", " d = 200\n", " emb = np.zeros((len(qlist), d)) # 存储所有qlist的embedding\n", " for i, ques in enumerate(qlist):\n", " vec = np.zeros(d) # 当前句子\n", " length = len(ques) # 句子长度\n", " for word in ques:\n", " try:\n", " vec += word_dict[word]\n", " except KeyError as e:\n", " vec += word_dict['unk']\n", " vec = vec / length\n", " emb[i] = vec\n", " return emb " ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/zbh/anaconda3/envs/py37/lib/python3.7/site-packages/ipykernel_launcher.py:12: RuntimeWarning: invalid value encountered in true_divide\n", " if sys.path[0] == '':\n" ] } ], "source": [ "# TODO 基于Glove向量获取句子向量\n", "emb = get_emb(qlist) # 这是 D*H的矩阵,这里的D是词典库的大小, H是词向量的大小。 这里面我们给定的每个单词的词向量,这需要从文本中读取\n", "\n", "X_w2v = emb # 初始化完emb之后就可以对每一个句子来构建句子向量了,这个过程使用average pooling来实现" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(86821, 200)" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "emb.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.3 使用BERT + average pooling\n", "最近流行的BERT也可以用来学出上下文相关的词向量(contex-aware embedding), 在很多问题上得到了比较好的结果。在这里,我们不做任何的训练,而是直接使用已经训练好的BERT embedding。 具体如何训练BERT将在之后章节里体会到。 为了获取BERT-embedding,可以直接下载已经训练好的模型从而获得每一个单词的向量。可以从这里获取: https://github.com/imgarylai/bert-embedding , 请使用```bert_12_768_12```\t当然,你也可以从其他source获取也没问题,只要是合理的词向量。 " ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "# !pip install bert-embedding" ] }, { "cell_type": "code", 
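"execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 补充说明(非原作业模板内容): 2.2 的输出里出现过 'invalid value encountered in true_divide' 的警告,\n", "# 下面 2.3 的 BERT 平均池化也出现过 'Mean of empty slice'。原因是个别问题经过预处理后分词结果为空,\n", "# average pooling 时会除以 0 产生 NaN。下面给出一个示意性的 sketch(假设: 空句子直接用全零向量表示,\n", "# OOV 的词用 'unk' 的 GloVe 向量代替; safe_average_pooling 这个函数名是这里自己起的), 仅供参考, 不是唯一做法。\n", "def safe_average_pooling(tokens, word_dict, d=200):\n", "    vec = np.zeros(d)\n", "    if len(tokens) == 0:                 # 空句子: 直接返回全零向量, 避免除以0产生NaN\n", "        return vec\n", "    for word in tokens:\n", "        vec += word_dict.get(word, word_dict['unk'])   # 不在GloVe词表里的词用'unk'向量代替\n", "    return vec / len(tokens)\n", "\n", "# 也可以直接把已经算好的矩阵里的 NaN 置零(可选): X_w2v = np.nan_to_num(X_w2v)\n" ] }, { "cell_type": "code", 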
"execution_count": 122, "metadata": {}, "outputs": [], "source": [ "from bert_embedding import BertEmbedding\n", "bert_embedding = BertEmbedding(model='bert_12_768_12')" ] }, { "cell_type": "code", "execution_count": 123, "metadata": {}, "outputs": [], "source": [ "# import time\n", "# t1 = time.time()\n", "# results = bert_embedding(qq[:1000])\n", "# print(time.time() - t1)" ] }, { "cell_type": "code", "execution_count": 135, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\n", "1000\n", "2000\n", "3000\n", "4000\n", "5000\n", "6000\n", "7000\n", "8000\n", "9000\n", "10000\n", "11000\n", "12000\n", "13000\n", "14000\n", "15000\n", "16000\n", "17000\n", "18000\n", "19000\n", "20000\n", "21000\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/Users/zbh/anaconda3/envs/py37/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.\n", " out=out, **kwargs)\n", "/Users/zbh/anaconda3/envs/py37/lib/python3.7/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars\n", " ret = ret.dtype.type(ret / rcount)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "22000\n", "23000\n", "24000\n", "25000\n", "26000\n", "27000\n", "28000\n", "29000\n", "30000\n", "31000\n", "32000\n", "33000\n", "34000\n", "35000\n", "36000\n", "37000\n", "38000\n", "39000\n", "40000\n", "41000\n", "42000\n", "43000\n", "44000\n", "45000\n", "46000\n", "47000\n", "48000\n", "49000\n", "50000\n", "51000\n", "52000\n", "53000\n", "54000\n", "55000\n", "56000\n", "57000\n", "58000\n", "59000\n", "60000\n", "61000\n", "62000\n", "63000\n", "64000\n", "65000\n", "66000\n", "67000\n", "68000\n", "69000\n", "70000\n", "71000\n", "72000\n", "73000\n", "74000\n", "75000\n", "76000\n", "77000\n", "78000\n", "79000\n", "80000\n", "81000\n", "82000\n", "83000\n", "84000\n", "85000\n", "86000\n" ] } ], "source": [ "results = np.zeros((len(qlist), 768))\n", "qlist_sentences = [' '.join(i) for i in qlist]\n", "for i, ques in enumerate(qlist_sentences):\n", " sentence, arrs = bert_embedding([ques])[0]\n", " # print(sentence)\n", "\n", " vecs = np.array(arrs)\n", " vec = np.mean(vecs, axis=0)\n", " # print(vec.shape)\n", " \n", " if i % 1000 == 0:\n", " print(i)\n", " \n", " results[i] = vec\n", " \n", " # if i == 200:\n", " # break" ] }, { "cell_type": "code", "execution_count": 138, "metadata": {}, "outputs": [], "source": [ "# TODO 基于BERT的句子向量计算\n", "\n", "X_bert = results # 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 第三部分: 相似度匹配以及搜索\n", "在这部分里,我们需要把用户每一个输入跟知识库里的每一个问题做一个相似度计算,从而得出最相似的问题。但对于这个问题,时间复杂度其实很高,所以我们需要结合倒排表来获取相似度最高的问题,从而获得答案。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.1 tf-idf + 余弦相似度\n", "我们可以直接基于计算出来的``tf-idf``向量,计算用户最新问题与库中存储的问题之间的相似度,从而选择相似度最高的问题的答案。这个方法的复杂度为``O(N)``, ``N``是库中问题的个数。" ] }, { "cell_type": "code", "execution_count": 566, "metadata": {}, "outputs": [], "source": [ "def get_top_results_tfidf_noindex(query):\n", " # TODO 需要编写\n", " \"\"\"\n", " 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n", " 1. 对于用户的输入 query 首先做一系列的预处理(上面提到的方法),然后再转换成tf-idf向量(利用上面的vectorizer)\n", " 2. 计算跟每个库里的问题之间的相似度\n", " 3. 
找出相似度最高的top5问题的答案\n", " \"\"\"\n", " \n", " q_vector = vectorizer.transform([' '.join(qlist_preprocess([query])[0])])\n", " # 计算余弦相似度,tfidf默认l2范数;矩阵乘法\n", " sim = (X_tfidf * q_vector.T).toarray()\n", "\n", " \n", " # res = np.argsort(sim)\n", " # query = 'when beyonce start become popular?'\n", " # ans: array([[43410, 57532, 57531, ..., 39267, 145, 0]])\n", " \n", " \n", " # 使用优先队列找出top5\n", " pq = PriorityQueue()\n", " for cur in range(sim.shape[0]):\n", " pq.put((sim[cur][0], cur))\n", " if len(pq.queue) > 5:\n", " pq.get()\n", "\n", " pq_rank = sorted(pq.queue, reverse=True, key=lambda x:x[0])\n", " # print(pq_rank)\n", "\n", " top_idxs = [x[1] for x in pq_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表\n", " # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n", " \n", "\n", " return [alist[i] for i in top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 " ] }, { "cell_type": "code", "execution_count": 567, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[['in the late 1990s'], ['Dangerously in Love Tour'], ['Particularly since the 1950s, pro wrestling events have frequently been responsible for sellout crowds at large arenas'], ['her mother'], ['Germany, the Netherlands, Switzerland, Latvia, Estonia and Hungary']]\n", "[['in the later 19th century'], ['Tags'], ['Ashoka'], ['economic, social, and cultural'], ['water buffalo']]\n" ] } ], "source": [ "# TODO: 编写几个测试用例,并输出结果\n", "print(get_top_results_tfidf_noindex(\"when beyonce start become popular?\"))\n", "print(get_top_results_tfidf_noindex(\"where jordge come from\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "你会发现上述的程序很慢,没错! 是因为循环了所有库里的问题。为了优化这个过程,我们需要使用一种数据结构叫做```倒排表```。 使用倒排表我们可以把单词和出现这个单词的文档做关键。 之后假如要搜索包含某一个单词的文档,即可以非常快速的找出这些文档。 在这个QA系统上,我们首先使用倒排表来快速查找包含至少一个单词的文档,然后再进行余弦相似度的计算,即可以大大减少```时间复杂度```。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.2 倒排表的创建\n", "倒排表的创建其实很简单,最简单的方法就是循环所有的单词一遍,然后记录每一个单词所出现的文档,然后把这些文档的ID保存成list即可。我们可以定义一个类似于```hash_map```, 比如 ``inverted_index = {}``, 然后存放包含每一个关键词的文档出现在了什么位置,也就是,通过关键词的搜索首先来判断包含这些关键词的文档(比如出现至少一个),然后对于candidates问题做相似度比较。" ] }, { "cell_type": "code", "execution_count": 531, "metadata": {}, "outputs": [], "source": [ "# TODO 请创建倒排表\n", "inverted_idx = {} # 定一个一个简单的倒排表,是一个map结构。 循环所有qlist一遍就可以\n", "\n", "for i, ques in enumerate(qlist):\n", " for word in ques:\n", " if word in inverted_idx.keys():\n", " inverted_idx[word].add(i)\n", " else:\n", " inverted_idx[word] = set([i])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.3 related_words.txt\n", "这里有一个问题还需要解决,就是语义的相似度。可以这么理解: 两个单词比如car, auto这两个单词长得不一样,但从语义上还是类似的。如果只是使用倒排表我们不能考虑到这些单词之间的相似度,这就导致如果我们搜索句子里包含了``car``, 则我们没法获取到包含auto的所有的文档。所以我们希望把这些信息也存下来。那这个问题如何解决呢? 其实也不难,可以提前构建好相似度的关系,比如对于``car``这个单词,一开始就找好跟它意思上比较类似的单词比如top 10,这些都标记为``related words``。所以最后我们就可以创建一个保存``related words``的一个``map``. 比如调用``related_words['car']``即可以调取出跟``car``意思上相近的TOP 10的单词。 \n", "\n", "那这个``related_words``又如何构建呢? 
在这里我们仍然使用``Glove``向量,然后计算一下俩俩的相似度(余弦相似度)。之后对于每一个词,存储跟它最相近的top 10单词,最终结果保存在``related_words``里面。 这个计算需要发生在离线,因为计算量很大,复杂度为``O(V*V)``, V是单词的总数。 \n", "\n", "这个计算过程的代码请放在``related.py``的文件里,然后结果保存在``related_words.txt``里。 我们在使用的时候直接从文件里读取就可以了,不用再重复计算。所以在此notebook里我们就直接读取已经计算好的结果。 作业提交时需要提交``related.py``和``related_words.txt``文件,这样在使用的时候就不再需要做这方面的计算了。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO 读取语义相关的单词\n", "def get_related_words(file):\n", " f = open('related_words.txt')\n", " related_words = {}\n", " \n", " for line in f:\n", " items = line.strip().split()\n", " word = items[0]\n", " sim_words_10 = items[1].split(',')\n", " related_words[word] = sim_words_10\n", " f.close()\n", " return related_words\n", "\n", "related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.4 利用倒排表搜索\n", "在这里,我们使用倒排表先获得一批候选问题,然后再通过余弦相似度做精准匹配,这样一来可以节省大量的时间。搜索过程分成两步:\n", "\n", "- 使用倒排表把候选问题全部提取出来。首先,对输入的新问题做分词等必要的预处理工作,然后对于句子里的每一个单词,从``related_words``里提取出跟它意思相近的top 10单词, 然后根据这些top词从倒排表里提取相关的文档,把所有的文档返回。 这部分可以放在下面的函数当中,也可以放在外部。\n", "- 然后针对于这些文档做余弦相似度的计算,最后排序并选出最好的答案。\n", "\n", "可以适当定义自定义函数,使得减少重复性代码" ] }, { "cell_type": "code", "execution_count": 574, "metadata": {}, "outputs": [], "source": [ "from nltk.corpus import wordnet as wn\n", "def get_related_idx(query):\n", "\n", " related_set = set()\n", " query_words = qlist_preprocess([query])[0]\n", " for query_word in query_words:\n", " # 1. 读取文件\n", " # related_words_list = related_words[query_word]\n", " # 2. nltk\n", " related_words_list = [x.name().split(\".\")[0] for x in wn.synsets(query_word)][:10]\n", " for related_word in related_words_list:\n", " try:\n", " related_set = related_set | inverted_idx[related_word]\n", " except KeyError as e:\n", " # print(e)\n", " continue \n", " \n", " related_idx = list(related_set)\n", " return related_idx" ] }, { "cell_type": "code", "execution_count": 575, "metadata": {}, "outputs": [], "source": [ "def get_top_results_tfidf(query):\n", " \"\"\"\n", " 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n", " 1. 利用倒排表来筛选 candidate (需要使用related_words). \n", " 2. 对于候选文档,计算跟输入问题之间的相似度\n", " 3. 找出相似度最高的top5问题的答案\n", " \"\"\"\n", " \n", " q_vector = vectorizer.transform([' '.join(qlist_preprocess([query])[0])])\n", " \n", " related_index = get_related_idx(query)\n", " X = X_tfidf[related_index]\n", " \n", " \n", " # 计算余弦相似度,tfidf默认l2范数;矩阵乘法\n", " sim = (X * q_vector.T).toarray()\n", " \n", " # 使用优先队列找出top5\n", " pq = PriorityQueue()\n", " for cur in range(sim.shape[0]):\n", " pq.put((sim[cur][0], cur))\n", " if len(pq.queue) > 5:\n", " pq.get()\n", "\n", " pq_rank = sorted(pq.queue, reverse=True, key=lambda x:x[0])\n", " # print(pq_rank)\n", "\n", " top_idxs = [x[1] for x in pq_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表\n", " # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n", " \n", "\n", " return [alist[i] for i in top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 " ] }, { "cell_type": "code", "execution_count": 606, "metadata": {}, "outputs": [], "source": [ "def vector_matrix(arr, brr):\n", " return arr.dot(brr.T) / (np.sqrt(np.sum(arr*arr)) * np.sqrt(np.sum(brr*brr, axis=1)))\n", "\n", "def get_top_results_w2v(query):\n", " \"\"\"\n", " 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n", " 1. 利用倒排表来筛选 candidate (需要使用related_words). \n", " 2. 对于候选文档,计算跟输入问题之间的相似度\n", " 3. 
找出相似度最高的top5问题的答案\n", " \"\"\"\n", " \n", " ques = qlist_preprocess([query])[0]\n", " \n", " vec = np.zeros(200)\n", " length = len(ques) # 句子长度\n", " for word in ques:\n", " try:\n", " vec += word_dict[word]\n", " except KeyError as e:\n", " vec += word_dict['unk']\n", " vec = vec / length\n", " \n", " q_vector = vec\n", " \n", " related_index = get_related_idx(query)\n", " X = X_w2v[related_index]\n", " \n", " \n", " # 计算余弦相似度,tfidf默认l2范数;矩阵乘法\n", " sim = vector_matrix(q_vector, X).reshape(-1, 1)\n", " \n", " # 使用优先队列找出top5\n", " pq = PriorityQueue()\n", " for cur in range(sim.shape[0]):\n", " pq.put((sim[cur][0], cur))\n", " if len(pq.queue) > 5:\n", " pq.get()\n", "\n", " pq_rank = sorted(pq.queue, reverse=True, key=lambda x:x[0])\n", " # print(pq_rank)\n", "\n", " top_idxs = [x[1] for x in pq_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表\n", " # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n", " \n", "\n", " return [alist[i] for i in top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 " ] }, { "cell_type": "code", "execution_count": 612, "metadata": {}, "outputs": [], "source": [ "def get_top_results_bert(query):\n", " \"\"\"\n", " 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n", " 1. 利用倒排表来筛选 candidate (需要使用related_words). \n", " 2. 对于候选文档,计算跟输入问题之间的相似度\n", " 3. 找出相似度最高的top5问题的答案\n", " \"\"\"\n", " \n", " sentence, arrs = bert_embedding([' '.join(qlist_preprocess([query])[0])])[0]\n", " # print(sentence)\n", "\n", " vecs = np.array(arrs)\n", " vec = np.mean(vecs, axis=0)\n", " q_vector = vec\n", " \n", " related_index = get_related_idx(query)\n", " X = X_bert[related_index]\n", " \n", " # 计算余弦相似度,tfidf默认l2范数;矩阵乘法\n", " sim = vector_matrix(q_vector, X).reshape(-1, 1)\n", " \n", " # 使用优先队列找出top5\n", " pq = PriorityQueue()\n", " for cur in range(sim.shape[0]):\n", " pq.put((sim[cur][0], cur))\n", " if len(pq.queue) > 5:\n", " pq.get()\n", "\n", " pq_rank = sorted(pq.queue, reverse=True, key=lambda x:x[0])\n", " # print(pq_rank)\n", "\n", " top_idxs = [x[1] for x in pq_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表\n", " # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n", " \n", "\n", " return [alist[i] for i in top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 " ] }, { "cell_type": "code", "execution_count": 615, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[['in the late 1990s'], ['Fredericksburg'], ['02:28:01 PM China Standard Time'], ['132'], ['Link and Toon Link']]\n", "[['in the late 1990s'], ['late 1990s'], ['02:28:01 PM China Standard Time'], ['Kanye West'], ['Shakira']]\n", "[['in the late 1990s'], ['late 1990s'], ['Diana Ross.'], ['I Was Here'], ['Romantic era']]\n", "[['At Last'], ['J. S. Bach, Mozart and Schubert'], ['piano'], ['Polish'], ['Polish']]\n", "[['Beyoncé Cosmetology Center'], ['five.'], ['Madonna and Celine Dion'], ['April 15'], ['Baz Luhrmann']]\n", "[['Madonna and Celine Dion'], ['2013 Met Gala'], ['118 million'], ['Baz Luhrmann'], ['eight']]\n" ] } ], "source": [ "# TODO: 编写几个测试用例,并输出结果\n", "\n", "# query = \"when beyonce start become popular?\"\n", "# qlist_preprocess([query])[0]\n", "test_query1 = \"when beyonce start become popular?\"\n", "test_query2 = \"where jordge come from\"\n", "\n", "print (get_top_results_tfidf(test_query1))\n", "print (get_top_results_w2v(test_query1))\n", "print (get_top_results_bert(test_query1))\n", "\n", "print (get_top_results_tfidf(test_query2))\n", "print (get_top_results_w2v(test_query2))\n", "print (get_top_results_bert(test_query2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4. 
拼写纠错\n", "其实用户在输入问题的时候,不能期待他一定会输入正确,有可能输入的单词是拼写错误的。这个时候我们需要后台及时捕获拼写错误,并进行纠正,然后再用修正之后的结果跟库里的问题做匹配。这里我们需要实现一个简单的拼写纠错的代码,然后自动去修复错误的单词。\n", "\n", "这里使用的拼写纠错方法是课程里讲过的方法,就是使用noisy channel model。 我们回想一下它的表示:\n", "\n", "$c^* = \text{argmax}_{c\in candidates} ~~p(c|s) = \text{argmax}_{c\in candidates} ~~p(s|c)p(c)$\n", "\n", "这里的```candidates```指的是针对于错误的单词的候选集,这部分我们可以假定是通过edit_distance来获取的(比如生成跟当前的词编辑距离为1或2的所有valid单词)。 valid单词可以定义为存在词典里的单词。 ```c```代表的是正确的单词, ```s```代表的是用户错误拼写的单词。 所以我们的目的是要寻找出在``candidates``里让上述概率最大的正确写法``c``。 \n", "\n", "$p(s|c)$,这个概率我们可以通过历史数据来获得,也就是对于一个正确的单词$c$, 有百分之多少人把它写成了错误的形式1,形式2... 这部分的数据可以从``spell-errors.txt``里面找得到。但在这个文件里,我们并没有标记这个概率,所以可以使用uniform probability来表示。这个也叫做channel probability。\n", "\n", "$p(c)$,这一项代表的是语言模型,也就是假如我们把错误的$s$改造成了$c$, 把它放回当前的语句之后有多通顺?在本次项目里我们使用bigram来评估这个概率。 举个例子: 假如有两个候选 $c_1, c_2$, 我们希望分别计算出它们的语言模型概率。 由于我们使用的是``bigram``, 我们需要计算出两个概率,分别是候选词跟前面的词、以及跟后面的词的``bigram``概率。 用一个例子来表示:\n", "\n", "给定: ``We are go to school tomorrow``, 对于这句话我们希望把中间的``go``替换成正确的形式,假如候选集里有两个,分别是``going``, ``went``, 这时候我们分别对这俩计算如下的概率:\n", "$p(going|are)p(to|going)$和 $p(went|are)p(to|went)$, 然后把这个概率当做是$p(c)$的概率。 然后再跟``channel probability``结合给出最终的概率大小。\n", "\n", "那这里的$p(going|are)$这些bigram概率又如何计算呢?答案是训练一个语言模型! 但训练一个语言模型需要一些文本数据,这个数据怎么找? 在这次项目作业里我们会用到``nltk``自带的``reuters``的文本类数据来训练一个语言模型。当然,如果你有资源你也可以尝试其他更大的数据。最终目的就是计算出``bigram``概率。 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.1 训练一个语言模型\n", "在这里,我们使用``nltk``自带的``reuters``数据来训练一个语言模型,并使用``add-one smoothing``做平滑。" ] }, { "cell_type": "code", "execution_count": 620, "metadata": {}, "outputs": [], "source": [ "# import nltk\n", "# nltk.download('reuters')" ] }, { "cell_type": "code", "execution_count": 621, "metadata": {}, "outputs": [], "source": [ "from nltk.corpus import reuters\n", "from collections import defaultdict\n", "\n", "# 读取语料库的数据\n", "categories = reuters.categories()\n", "corpus = reuters.sents(categories=categories)\n", "\n", "# 循环所有的语料库并构建bigram probability. bigram[word1][word2]: 在word1出现的情况下下一个是word2的概率。 \n", "# 这里用(word1, word2)做key来存计数, 句首句尾分别补上<s>和</s>, 概率在bigram_prob里用add-one smoothing现算\n", "unigram_count, bigram_count = defaultdict(int), defaultdict(int)\n", "for sent in corpus:\n", "    words = ['<s>'] + [w.lower() for w in sent] + ['</s>']\n", "    for w1, w2 in zip(words[:-1], words[1:]):\n", "        unigram_count[w1] += 1\n", "        bigram_count[(w1, w2)] += 1\n", "\n", "V = len(unigram_count)   # 词表大小, 用于add-one smoothing\n", "\n", "def bigram_prob(w1, w2):\n", "    # p(w2|w1), add-one smoothing\n", "    return (bigram_count.get((w1, w2), 0) + 1) / (unigram_count.get(w1, 0) + V)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.2 构建Channel Probs\n", "基于``spell-errors.txt``文件构建``channel probability``, 其中$channel[c][s]$表示正确的单词$c$被写错成$s$的概率。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 构建channel probability  \n", "channel = {}\n", "\n", "for line in open('spell-errors.txt'):\n", "    # 每一行的格式假定为: correct: mistake1, mistake2, ...\n", "    correct, mistakes = line.strip().split(':', 1)\n", "    mistakes = [m.strip() for m in mistakes.split(',') if m.strip()]\n", "    # 文件里没有给出每种错误出现的频率, 按题目要求当做同等概率(uniform probability)\n", "    channel[correct.strip()] = {m: 1.0 / len(mistakes) for m in mistakes}\n", "\n", "# channel[c][s]: 正确的单词c被写错成s的概率\n", "print(len(channel))   # 直接print(channel)会输出整个词典, 太长, 这里只打印词条个数" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.3 根据错别字生成所有候选集合\n", "给定一个错误的单词,首先生成跟这个单词距离为1或者2的所有的候选集合。 这部分的代码我们在课程上也讲过,可以参考一下。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 词典库, 用来过滤出写法上正确的单词 (假定vocab.txt每行一个单词)\n", "vocab = set(line.strip().lower() for line in open('vocab.txt'))\n", "\n", "def generate_candidates(word):\n", "    # 基于拼写错误的单词,生成跟它的编辑距离为1或者2的单词,并通过词典库的过滤。\n", "    # 只留写法上正确的单词。 为简单起见这里只生成编辑距离为1的候选; 距离为2的候选可以对edits1的结果再做一次同样的扩展。\n", "    letters = 'abcdefghijklmnopqrstuvwxyz'\n", "    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]\n", "    edits1 = set([L + R[1:] for L, R in splits if R] +                          # 删除一个字符\n", "                 [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1] +   # 交换相邻字符\n", "                 [L + c + R[1:] for L, R in splits if R for c in letters] +     # 替换一个字符\n", "                 [L + c + R for L, R in splits for c in letters])               # 插入一个字符\n", "    return set(w for w in edits1 if w in vocab)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.4 给定一个输入,如果有错误需要纠正\n", "\n", "给定一个输入``query``, 如果这里有些单词是拼错的,就需要把它纠正过来。这部分的实现可以简单一点: 对于``query``分词,然后把分词后的每一个单词在词库里面搜一下,假设搜不到的话可以认为是拼写错误的! 如果拼写错误了,再通过``channel``和``bigram``来计算最适合的候选。" ] }, 
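{ "cell_type": "markdown", "metadata": {}, "source": [ "下面补充一个打分用的辅助函数(原作业模板里没有,属于示意性的写法): 假设4.1里已经有了带add-one smoothing的``bigram_prob``,4.2里已经有了``channel``,就可以按照上面的noisy channel model对每一个候选``c``打分。其中``noisy_channel_score``这个函数名、以及对channel里查不到的$(c, s)$使用一个很小的概率(1e-6)都是这里自己的假设,仅供参考,并不是唯一的做法。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 示意: 结合channel probability和bigram语言模型, 对候选c打分 (取对数相加, 避免连乘下溢)\n", "import math\n", "\n", "def noisy_channel_score(s, c, prev_word, next_word):\n", "    p_sc = channel.get(c, {}).get(s, 1e-6)        # p(s|c): channel里查不到时用一个很小的概率(假设值)\n", "    score = math.log(p_sc)\n", "    score += math.log(bigram_prob(prev_word, c))  # 前面的词与候选c的bigram概率\n", "    score += math.log(bigram_prob(c, next_word))  # 候选c与后面的词的bigram概率\n", "    return score\n" ] }, 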
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def spell_corrector(line):\n", "    # 1. 首先做分词,然后把``line``表示成``tokens``\n", "    tokens = line.strip().split()\n", "    new_tokens = []\n", "    # 2. 循环每一个token, 然后判断是否存在词库里。如果不存在就意味着是拼写错误的,需要修正。\n", "    #    修正的过程就使用上述提到的``noisy channel model``, 从而找出最好的修正之后的结果。\n", "    #    (候选来自4.3的generate_candidates, 打分用的是上面补充的noisy_channel_score)\n", "    for i, token in enumerate(tokens):\n", "        word = token.lower().strip('?!.,')\n", "        candidates = [] if word in vocab else generate_candidates(word)\n", "        if not candidates:                      # 本身在词库里, 或者找不到合法候选: 保持原样\n", "            new_tokens.append(token)\n", "            continue\n", "        prev_w = tokens[i - 1].lower() if i > 0 else '<s>'\n", "        next_w = tokens[i + 1].lower() if i < len(tokens) - 1 else '</s>'\n", "        best = max(candidates, key=lambda c: noisy_channel_score(word, c, prev_w, next_w))\n", "        new_tokens.append(best)\n", "    newline = ' '.join(new_tokens)\n", "    return newline   # 修正之后的结果,假如用户输入没有问题,那这时候``newline = line``\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.5 基于拼写纠错算法,实现用户输入自动矫正\n", "首先有了用户的输入``query``, 然后做必要的处理把句子转换成tokens的形式,然后对于每一个token检查是否valid, 如果不是的话就进行上面的修正过程。 " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_query1 = \"when beyonce strat become popular?\"  # 拼写错误的 (示例: strat -> start)\n", "test_query2 = \"where jordge come from\"  # 拼写错误的 (示例, 沿用上面3.4的测试问题)\n", "\n", "test_query1 = spell_corrector(test_query1)\n", "test_query2 = spell_corrector(test_query2)\n", "\n", "print (get_top_results_tfidf(test_query1))\n", "print (get_top_results_w2v(test_query1))\n", "print (get_top_results_bert(test_query1))\n", "\n", "print (get_top_results_tfidf(test_query2))\n", "print (get_top_results_w2v(test_query2))\n", "print (get_top_results_bert(test_query2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 附录 \n", "在本次项目中我们实现了一个简易的问答系统。基于这个项目,我们其实可以有很多方面的延伸。\n", "- 在这里,我们使用文本向量之间的余弦相似度作为了一个标准。但实际上,我们也可以基于包含关键词的情况来给一定的权重。比如一个单词跟related word有多相似,越相似就意味着相似度更高,权重也会更大。 \n", "- 另外,除了根据词向量去寻找``related words``,也可以提前定义好同义词库,但这个需要大量的人力成本。 \n", "- 在这里,我们直接返回了问题的答案。 但在理想情况下,我们还是希望通过问题的种类来返回最合适的答案。 比如一个用户问:“明天北京的天气是多少?”, 那这个问题的答案其实是一个具体的温度(其实也叫做实体),所以需要在答案的基础上做进一步的抽取。这项技术其实是跟信息抽取相关的。 \n", "- 对于词向量,我们只是使用了``average pooling``, 除了average pooling,我们也还有其他的经典的方法直接去学出一个句子的向量。\n", "- 短文本的相似度分析一直是业界和学术界一个具有挑战性的问题。在这里我们使用尽可能多的同义词来提升系统的性能。但除了这种简单的方法,可以尝试其他的方法比如WMD,或者适当结合parsing相关的知识点。 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "好了,祝你好运! " ] } ], "metadata": { "kernelspec": { "display_name": "py37", "language": "python", "name": "py37" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "nbformat": 4, "nbformat_minor": 4 }