Commit 34b086e7 by 20200519029

0629

parent c6e8b5f4
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## 搭建一个简单的问答系统 (Building a Simple QA System)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"本次项目的目标是搭建一个基于检索式的简易的问答系统,这是一个最经典的方法也是最有效的方法。 \n",
"\n",
"```不要单独创建一个文件,所有的都在这里面编写,不要试图改已经有的函数名字 (但可以根据需求自己定义新的函数)```\n",
"\n",
"```预估完成时间```: 5-10小时"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 检索式的问答系统\n",
"问答系统所需要的数据已经提供,对于每一个问题都可以找得到相应的答案,所以可以理解为每一个样本数据是 ``<问题、答案>``。 那系统的核心是当用户输入一个问题的时候,首先要找到跟这个问题最相近的已经存储在库里的问题,然后直接返回相应的答案即可(但实际上也可以抽取其中的实体或者关键词)。 举一个简单的例子:\n",
"\n",
"假设我们的库里面已有存在以下几个<问题,答案>:\n",
"- <\"贪心学院主要做什么方面的业务?”, “他们主要做人工智能方面的教育”>\n",
"- <“国内有哪些做人工智能教育的公司?”, “贪心学院”>\n",
"- <\"人工智能和机器学习的关系什么?\", \"其实机器学习是人工智能的一个范畴,很多人工智能的应用要基于机器学习的技术\">\n",
"- <\"人工智能最核心的语言是什么?\", ”Python“>\n",
"- .....\n",
"\n",
"假设一个用户往系统中输入了问题 “贪心学院是做什么的?”, 那这时候系统先去匹配最相近的“已经存在库里的”问题。 那在这里很显然是 “贪心学院是做什么的”和“贪心学院主要做什么方面的业务?”是最相近的。 所以当我们定位到这个问题之后,直接返回它的答案 “他们主要做人工智能方面的教育”就可以了。 所以这里的核心问题可以归结为计算两个问句(query)之间的相似度。"
]
},
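{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Illustrative sketch only, not part of the required solution.) It shows the retrieval idea\n",
"# described above on a tiny toy knowledge base, using plain word-overlap (Jaccard) similarity;\n",
"# the real system below uses tf-idf and embeddings instead.\n",
"toy_kb = [\n",
"    ('What does the academy mainly work on?', 'They mainly do education in artificial intelligence'),\n",
"    ('What is the core programming language for AI?', 'Python'),\n",
"]\n",
"\n",
"def toy_answer(query):\n",
"    q_words = set(query.lower().split())\n",
"    def overlap(question):\n",
"        w = set(question.lower().split())\n",
"        return len(q_words & w) / len(q_words | w)\n",
"    _, best_answer = max(toy_kb, key=lambda pair: overlap(pair[0]))\n",
"    return best_answer\n",
"\n",
"print(toy_answer('What does the academy do?'))  # returns the stored answer of the closest question"
]
},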
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 项目中涉及到的任务描述\n",
"问答系统看似简单,但其中涉及到的内容比较多。 在这里先做一个简单的解释,总体来讲,我们即将要搭建的模块包括:\n",
"\n",
"- 文本的读取: 需要从相应的文件里读取```(问题,答案)```\n",
"- 文本预处理: 清洗文本很重要,需要涉及到```停用词过滤```等工作\n",
"- 文本的表示: 如果表示一个句子是非常核心的问题,这里会涉及到```tf-idf```, ```Glove```以及```BERT Embedding```\n",
"- 文本相似度匹配: 在基于检索式系统中一个核心的部分是计算文本之间的```相似度```,从而选择相似度最高的问题然后返回这些问题的答案\n",
"- 倒排表: 为了加速搜索速度,我们需要设计```倒排表```来存储每一个词与出现的文本\n",
"- 词义匹配:直接使用倒排表会忽略到一些意思上相近但不完全一样的单词,我们需要做这部分的处理。我们需要提前构建好```相似的单词```然后搜索阶段使用\n",
"- 拼写纠错:我们不能保证用户输入的准确,所以第一步需要做用户输入检查,如果发现用户拼错了,我们需要及时在后台改正,然后按照修改后的在库里面搜索\n",
"- 文档的排序: 最后返回结果的排序根据文档之间```余弦相似度```有关,同时也跟倒排表中匹配的单词有关\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 项目中需要的数据:\n",
"1. ```dev-v2.0.json```: 这个数据包含了问题和答案的pair, 但是以JSON格式存在,需要编写parser来提取出里面的问题和答案。 \n",
"2. ```glove.6B```: 这个文件需要从网上下载,下载地址为:https://nlp.stanford.edu/projects/glove/, 请使用d=200的词向量\n",
"3. ```spell-errors.txt``` 这个文件主要用来编写拼写纠错模块。 文件中第一列为正确的单词,之后列出来的单词都是常见的错误写法。 但这里需要注意的一点是我们没有给出他们之间的概率,也就是p(错误|正确),所以我们可以认为每一种类型的错误都是```同等概率```\n",
"4. ```vocab.txt``` 这里列了几万个英文常见的单词,可以用这个词库来验证是否有些单词被拼错\n",
"5. ```testdata.txt``` 这里搜集了一些测试数据,可以用来测试自己的spell corrector。这个文件只是用来测试自己的程序。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在本次项目中,你将会用到以下几个工具:\n",
"- ```sklearn```。具体安装请见:http://scikit-learn.org/stable/install.html sklearn包含了各类机器学习算法和数据处理工具,包括本项目需要使用的词袋模型,均可以在sklearn工具包中找得到。 \n",
"- ```jieba```,用来做分词。具体使用方法请见 https://github.com/fxsjy/jieba\n",
"- ```bert embedding```: https://github.com/imgarylai/bert-embedding\n",
"- ```nltk```:https://www.nltk.org/index.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第一部分:对于训练数据的处理:读取文件和预处理"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- ```文本的读取```: 需要从文本中读取数据,此处需要读取的文件是```dev-v2.0.json```,并把读取的文件存入一个列表里(list)\n",
"- ```文本预处理```: 对于问题本身需要做一些停用词过滤等文本方面的处理\n",
"- ```可视化分析```: 对于给定的样本数据,做一些可视化分析来更好地理解数据"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1.1节: 文本的读取\n",
"把给定的文本数据读入到```qlist```和```alist```当中,这两个分别是列表,其中```qlist```是问题的列表,```alist```是对应的答案列表"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"file_path = 'train-v2.0.json'\n",
"\n",
"def read_corpus():\n",
" \"\"\"\n",
" 读取给定的语料库,并把问题列表和答案列表分别写入到 qlist, alist 里面。 在此过程中,不用对字符换做任何的处理(这部分需要在 Part 2.3里处理)\n",
" qlist = [\"问题1\", “问题2”, “问题3” ....]\n",
" alist = [\"答案1\", \"答案2\", \"答案3\" ....]\n",
" 务必要让每一个问题和答案对应起来(下标位置一致)\n",
" \"\"\"\n",
" # TODO 需要完成的代码部分 ...\n",
" with open(file_path,encoding='utf-8') as f:\n",
" corpus = json.load(f) \n",
" \n",
" qlist=[]\n",
" alist=[]\n",
" for item in corpus['data']:\n",
" for para in item['paragraphs']:\n",
" for qas in para['qas']:\n",
" \n",
" qlist.append(qas['question'])\n",
" try:\n",
" alist.append(qas['answers'][0]['text'])\n",
" except IndexError:\n",
" qlist.pop()\n",
" \n",
" print('q',len(qlist),'a',len(alist))\n",
" assert len(qlist) == len(alist) # 确保长度一样\n",
" return qlist, alist\n",
"\n",
"(qlist, alist) = read_corpus()\n",
"qlist[:3]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1.2 理解数据(可视化分析/统计信息)\n",
"对数据的理解是任何AI工作的第一步, 需要对数据有个比较直观的认识。在这里,简单地统计一下:\n",
"\n",
"- 在```qlist```出现的总单词个数\n",
"- 按照词频画一个```histogram``` plot"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 统计一下在qlist中总共出现了多少个单词? 总共出现了多少个不同的单词(unique word)?\n",
"# 这里需要做简单的分词,对于英文我们根据空格来分词即可,其他过滤暂不考虑(只需分词)\n",
"\n",
"from collections import Counter\n",
"import matplotlib.pyplot as plt\n",
"\n",
"words_cnt = Counter()\n",
"\n",
"for text in qlist:\n",
" words_cnt.update(text.replace('.','').replace('?','').replace('!','').replace(',','').split(' '))\n",
"\n",
"value_sort = sorted(words_cnt.values(), reverse=True) \n",
"\n",
"plt.subplot(221)\n",
"plt.plot(value_sort)\n",
"plt.subplot(222)\n",
"plt.plot(value_sort[:2000])\n",
"plt.subplot(223)\n",
"plt.plot(value_sort[:200])\n",
"plt.subplot(224)\n",
"plt.plot(value_sort[:20])\n",
"plt.show()\n",
"\n",
"# 显示词频最高前10词,因为只取高频值,所以value转换时重合的概率较小,即时重合也没有太大影响\n",
"inverse = dict(zip(words_cnt.values(), words_cnt.keys()))\n",
"print(\"词数(type):%d\" % len(words_cnt))\n",
"print([[inverse[v], v] for v in value_sort[:20]])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 统计一下qlist中出现1次,2次,3次... 出现的单词个数, 然后画一个plot. 这里的x轴是单词出现的次数(1,2,3,..), y轴是单词个数。\n",
"# 从左到右分别是 出现1次的单词数,出现2次的单词数,出现3次的单词数...\n",
"import numpy as np\n",
"cnt = Counter(words_cnt.values())\n",
"\n",
"print(len(cnt))\n",
"cnt = dict(cnt)\n",
"sorted_cnt = sorted(cnt.items(), key=lambda cnt: cnt[1])\n",
"\n",
"X = [x[0] for x in sorted_cnt]\n",
"Y = [x[1] for x in sorted_cnt]\n",
"\n",
"# x = list(sorted_cnt.keys())\n",
"# y = list(sorted_cnt.values())\n",
"# plt.plot(Y,X )\n",
"plt.subplot(311)\n",
"plt.plot(Y, X)\n",
"plt.subplot(312)\n",
"plt.xlim((0, 150))\n",
"plt.ylim((0, 400))\n",
"plt.plot(Y, X)\n",
"plt.subplot(313)\n",
"plt.xlim((0, 50))\n",
"plt.ylim((0, 500))\n",
"plt.plot(Y, X)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 从上面的图中能观察到什么样的现象? 这样的一个图的形状跟一个非常著名的函数形状很类似,能所出此定理吗? \n",
"# hint: [XXX]'s law\n",
"# \n",
"# "
]
},
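{
"cell_type": "markdown",
"metadata": {},
"source": [
"A note on the hint above (the law is also named in the preprocessing code further down): Zipf's law says that if words are ranked by frequency, the frequency of the $r$-th most common word is roughly proportional to $1/r$, i.e. $f(r) \\approx C/r$. On a log-log scale this is approximately a straight line, which matches the long-tailed curves in the plots above."
]
},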
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"#### 1.3 文本预处理\n",
"此部分需要做文本方面的处理。 以下是可以用到的一些方法:\n",
"\n",
"- 1. 停用词过滤 (去网上搜一下 \"english stop words list\",会出现很多包含停用词库的网页,或者直接使用NLTK自带的) \n",
"- 2. 转换成lower_case: 这是一个基本的操作 \n",
"- 3. 去掉一些无用的符号: 比如连续的感叹号!!!, 或者一些奇怪的单词。\n",
"- 4. 去掉出现频率很低的词:比如出现次数少于10,20.... (想一下如何选择阈值)\n",
"- 5. 对于数字的处理: 分词完只有有些单词可能就是数字比如44,415,把所有这些数字都看成是一个单词,这个新的单词我们可以定义为 \"#number\"\n",
"- 6. lemmazation: 在这里不要使用stemming, 因为stemming的结果有可能不是valid word。\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 需要做文本方面的处理。 从上述几个常用的方法中选择合适的方法给qlist做预处理(不一定要按照上面的顺序,不一定要全部使用)\n",
"\n",
"qlist = # 更新后的问题列表"
]
},
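{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Illustrative sketch of one possible preprocessing pipeline; the exact choices, such as the\n",
"# NLTK stop-word list and the '#number' token, are assumptions rather than requirements.)\n",
"import re\n",
"from nltk.corpus import stopwords\n",
"from nltk.stem import WordNetLemmatizer\n",
"\n",
"lemmatizer = WordNetLemmatizer()\n",
"stop_words = set(stopwords.words('english'))\n",
"\n",
"def preprocess(text):\n",
"    tokens = []\n",
"    for word in re.findall(r\"[a-z0-9']+\", text.lower()):\n",
"        if word.isdigit():\n",
"            word = '#number'   # collapse every plain number into one token\n",
"        else:\n",
"            word = lemmatizer.lemmatize(word)\n",
"        if word not in stop_words:\n",
"            tokens.append(word)\n",
"    return tokens\n",
"\n",
"# qlist = [' '.join(preprocess(q)) for q in qlist]   # one possible way to update qlist"
]
},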
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第二部分: 文本的表示\n",
"当我们做完必要的文本处理之后就需要想办法表示文本了,这里有几种方式\n",
"\n",
"- 1. 使用```tf-idf vector```\n",
"- 2. 使用embedding技术如```word2vec```, ```bert embedding```等\n",
"\n",
"下面我们分别提取这三个特征来做对比。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.1 使用tf-idf表示向量\n",
"把```qlist```中的每一个问题的字符串转换成```tf-idf```向量, 转换之后的结果存储在```X```矩阵里。 ``X``的大小是: ``N* D``的矩阵。 这里``N``是问题的个数(样本个数),\n",
"``D``是词典库的大小"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO \n",
"vectorizer = # 定义一个tf-idf的vectorizer\n",
"\n",
"X_tfidf = # 结果存放在X矩阵里"
]
},
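{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch using sklearn's TfidfVectorizer, which is listed among the tools above. It assumes\n",
"# qlist holds whitespace-joined, preprocessed question strings; variable names are illustrative.)\n",
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"\n",
"tfidf_vectorizer = TfidfVectorizer()\n",
"X_tfidf_demo = tfidf_vectorizer.fit_transform(qlist)   # N x D sparse matrix\n",
"print(X_tfidf_demo.shape)"
]
},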
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 使用wordvec + average pooling\n",
"词向量方面需要下载: https://nlp.stanford.edu/projects/glove/ (请下载``glove.6B.zip``),并使用``d=200``的词向量(200维)。国外网址如果很慢,可以在百度上搜索国内服务器上的。 每个词向量获取完之后,即可以得到一个句子的向量。 我们通过``average pooling``来实现句子的向量。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 基于Glove向量获取句子向量\n",
"emb = # 这是 D*H的矩阵,这里的D是词典库的大小, H是词向量的大小。 这里面我们给定的每个单词的词向量,\n",
" # 这需要从文本中读取\n",
" \n",
"X_w2v = # 初始化完emb之后就可以对每一个句子来构建句子向量了,这个过程使用average pooling来实现\n"
]
},
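{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch only. It assumes glove.6B.200d.txt from the Stanford link above sits in the working\n",
"# directory; the zero-vector fallback for out-of-vocabulary words is an assumption.)\n",
"import numpy as np\n",
"\n",
"glove = {}\n",
"with open('glove.6B.200d.txt', encoding='utf-8') as f:\n",
"    for line in f:\n",
"        parts = line.rstrip().split(' ')\n",
"        glove[parts[0]] = np.asarray(parts[1:], dtype='float32')\n",
"\n",
"def glove_sentence_vector(sentence, dim=200):\n",
"    # average pooling over the words that exist in the GloVe vocabulary\n",
"    vectors = [glove[w] for w in sentence.split() if w in glove]\n",
"    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)\n",
"\n",
"# X_w2v = np.vstack([glove_sentence_vector(q) for q in qlist])"
]
},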
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.3 使用BERT + average pooling\n",
"最近流行的BERT也可以用来学出上下文相关的词向量(contex-aware embedding), 在很多问题上得到了比较好的结果。在这里,我们不做任何的训练,而是直接使用已经训练好的BERT embedding。 具体如何训练BERT将在之后章节里体会到。 为了获取BERT-embedding,可以直接下载已经训练好的模型从而获得每一个单词的向量。可以从这里获取: https://github.com/imgarylai/bert-embedding , 请使用```bert_12_768_12```\t当然,你也可以从其他source获取也没问题,只要是合理的词向量。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 基于BERT的句子向量计算\n",
"\n",
"X_bert = # 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。 "
]
},
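{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch based on the bert-embedding package linked above; the call pattern mirrors the usage\n",
"# shown later in this notebook, but treat the details, including the dataset name, as\n",
"# assumptions rather than the required solution. Embedding 80k+ questions this way is slow.)\n",
"import numpy as np\n",
"from bert_embedding import BertEmbedding\n",
"\n",
"bert = BertEmbedding(model='bert_12_768_12', dataset_name='book_corpus_wiki_en_cased')\n",
"\n",
"def bert_sentence_vector(sentence):\n",
"    tokens, vectors = bert.embedding([sentence])[0]   # per-token 768-d vectors for one sentence\n",
"    return np.mean(vectors, axis=0) if vectors else np.zeros(768)\n",
"\n",
"# X_bert = np.vstack([bert_sentence_vector(q) for q in qlist])"
]
},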
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### 第三部分: 相似度匹配以及搜索\n",
"在这部分里,我们需要把用户每一个输入跟知识库里的每一个问题做一个相似度计算,从而得出最相似的问题。但对于这个问题,时间复杂度其实很高,所以我们需要结合倒排表来获取相似度最高的问题,从而获得答案。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.1 tf-idf + 余弦相似度\n",
"我们可以直接基于计算出来的``tf-idf``向量,计算用户最新问题与库中存储的问题之间的相似度,从而选择相似度最高的问题的答案。这个方法的复杂度为``O(N)``, ``N``是库中问题的个数。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_top_results_tfidf_noindex(query):\n",
" # TODO 需要编写\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 对于用户的输入 query 首先做一系列的预处理(上面提到的方法),然后再转换成tf-idf向量(利用上面的vectorizer)\n",
" 2. 计算跟每个库里的问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下标 \n",
" # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 "
]
},
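{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of the no-index version, assuming `vectorizer` and `X_tfidf` exist as described in\n",
"# Part 2.1. heapq plays the role of the priority queue mentioned in the hint; since\n",
"# TfidfVectorizer L2-normalises rows by default, the dot product equals cosine similarity.)\n",
"import heapq\n",
"\n",
"def top5_answers_tfidf_noindex(query_text):\n",
"    q_vec = vectorizer.transform([query_text])\n",
"    sims = (X_tfidf @ q_vec.T).toarray().ravel()    # cosine similarity to every stored question\n",
"    top_idxs = heapq.nlargest(5, range(len(sims)), key=sims.__getitem__)\n",
"    return [alist[i] for i in top_idxs]"
]
},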
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"print (get_top_results_tfidf_noindex(\"\"))\n",
"print (get_top_results_tfidf_noindex(\"\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"你会发现上述的程序很慢,没错! 是因为循环了所有库里的问题。为了优化这个过程,我们需要使用一种数据结构叫做```倒排表```。 使用倒排表我们可以把单词和出现这个单词的文档做关键。 之后假如要搜索包含某一个单词的文档,即可以非常快速的找出这些文档。 在这个QA系统上,我们首先使用倒排表来快速查找包含至少一个单词的文档,然后再进行余弦相似度的计算,即可以大大减少```时间复杂度```。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.2 倒排表的创建\n",
"倒排表的创建其实很简单,最简单的方法就是循环所有的单词一遍,然后记录每一个单词所出现的文档,然后把这些文档的ID保存成list即可。我们可以定义一个类似于```hash_map```, 比如 ``inverted_index = {}``, 然后存放包含每一个关键词的文档出现在了什么位置,也就是,通过关键词的搜索首先来判断包含这些关键词的文档(比如出现至少一个),然后对于candidates问题做相似度比较。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 请创建倒排表\n",
"inverted_idx = {} # 定一个一个简单的倒排表,是一个map结构。 循环所有qlist一遍就可以"
]
},
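{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of the inverted index described above: one pass over qlist, mapping every word to the\n",
"# set of question indices it appears in. Assumes the questions are whitespace-tokenisable strings.)\n",
"from collections import defaultdict\n",
"\n",
"def build_inverted_index(questions):\n",
"    index = defaultdict(set)\n",
"    for i, question in enumerate(questions):\n",
"        for word in question.split():\n",
"            index[word].add(i)\n",
"    return index\n",
"\n",
"# inverted_idx = build_inverted_index(qlist)"
]
},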
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.3 语义相似度\n",
"这里有一个问题还需要解决,就是语义的相似度。可以这么理解: 两个单词比如car, auto这两个单词长得不一样,但从语义上还是类似的。如果只是使用倒排表我们不能考虑到这些单词之间的相似度,这就导致如果我们搜索句子里包含了``car``, 则我们没法获取到包含auto的所有的文档。所以我们希望把这些信息也存下来。那这个问题如何解决呢? 其实也不难,可以提前构建好相似度的关系,比如对于``car``这个单词,一开始就找好跟它意思上比较类似的单词比如top 10,这些都标记为``related words``。所以最后我们就可以创建一个保存``related words``的一个``map``. 比如调用``related_words['car']``即可以调取出跟``car``意思上相近的TOP 10的单词。 \n",
"\n",
"那这个``related_words``又如何构建呢? 在这里我们仍然使用``Glove``向量,然后计算一下俩俩的相似度(余弦相似度)。之后对于每一个词,存储跟它最相近的top 10单词,最终结果保存在``related_words``里面。 这个计算需要发生在离线,因为计算量很大,复杂度为``O(V*V)``, V是单词的总数。 \n",
"\n",
"这个计算过程的代码请放在``related.py``的文件里,然后结果保存在``related_words.txt``里。 我们在使用的时候直接从文件里读取就可以了,不用再重复计算。所以在此notebook里我们就直接读取已经计算好的结果。 作业提交时需要提交``related.py``和``related_words.txt``文件,这样在使用的时候就不再需要做这方面的计算了。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 读取语义相关的单词\n",
"def get_related_words(file):\n",
" \n",
" return related_words\n",
"\n",
"related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。"
]
},
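{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of the offline computation that would live in related.py: normalise the GloVe vectors,\n",
"# take pairwise dot products, and keep the 10 nearest neighbours of every word. Restricting it to\n",
"# the question vocabulary shrinks the O(V*V) cost, though it is still a heavy offline job; the\n",
"# output format of related_words.txt is not fixed by the assignment, so that part is left out.)\n",
"import numpy as np\n",
"\n",
"def build_related_words(word_vectors, vocab, k=10):\n",
"    words = [w for w in vocab if w in word_vectors]\n",
"    M = np.vstack([word_vectors[w] for w in words])\n",
"    M = M / np.linalg.norm(M, axis=1, keepdims=True)   # unit vectors, so dot product = cosine\n",
"    sims = M @ M.T\n",
"    np.fill_diagonal(sims, -1.0)                       # exclude the word itself\n",
"    top = np.argsort(-sims, axis=1)[:, :k]\n",
"    return {w: [words[j] for j in top[i]] for i, w in enumerate(words)}"
]
},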
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.4 利用倒排表搜索\n",
"在这里,我们使用倒排表先获得一批候选问题,然后再通过余弦相似度做精准匹配,这样一来可以节省大量的时间。搜索过程分成两步:\n",
"\n",
"- 使用倒排表把候选问题全部提取出来。首先,对输入的新问题做分词等必要的预处理工作,然后对于句子里的每一个单词,从``related_words``里提取出跟它意思相近的top 10单词, 然后根据这些top词从倒排表里提取相关的文档,把所有的文档返回。 这部分可以放在下面的函数当中,也可以放在外部。\n",
"- 然后针对于这些文档做余弦相似度的计算,最后排序并选出最好的答案。\n",
"\n",
"可以适当定义自定义函数,使得减少重复性代码"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_top_results_tfidf(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_top_results_w2v(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_top_results_bert(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
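{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of the two-step search described in 3.4: collect candidates from the inverted index via\n",
"# the query words and their related words, then rank only those candidates by tf-idf cosine\n",
"# similarity. Assumes inverted_idx, related_words, vectorizer and X_tfidf from the cells above.)\n",
"import heapq\n",
"\n",
"def candidate_ids(query_words):\n",
"    cands = set()\n",
"    for w in query_words:\n",
"        cands |= inverted_idx.get(w, set())\n",
"        for related in related_words.get(w, []):\n",
"            cands |= inverted_idx.get(related, set())\n",
"    return sorted(cands)\n",
"\n",
"def top5_answers_tfidf_indexed(query_text):\n",
"    cands = candidate_ids(query_text.split())\n",
"    if not cands:\n",
"        return []\n",
"    q_vec = vectorizer.transform([query_text])\n",
"    sims = (X_tfidf[cands] @ q_vec.T).toarray().ravel()\n",
"    best = heapq.nlargest(5, range(len(cands)), key=sims.__getitem__)\n",
"    return [alist[cands[i]] for i in best]"
]
},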
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"\n",
"test_query1 = \"\"\n",
"test_query2 = \"\"\n",
"\n",
"print (get_top_results_tfidf(test_query1))\n",
"print (get_top_results_w2v(test_query1))\n",
"print (get_top_results_bert(test_query1))\n",
"\n",
"print (get_top_results_tfidf(test_query2))\n",
"print (get_top_results_w2v(test_query2))\n",
"print (get_top_results_bert(test_query2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. 拼写纠错\n",
"其实用户在输入问题的时候,不能期待他一定会输入正确,有可能输入的单词的拼写错误的。这个时候我们需要后台及时捕获拼写错误,并进行纠正,然后再通过修正之后的结果再跟库里的问题做匹配。这里我们需要实现一个简单的拼写纠错的代码,然后自动去修复错误的单词。\n",
"\n",
"这里使用的拼写纠错方法是课程里讲过的方法,就是使用noisy channel model。 我们回想一下它的表示:\n",
"\n",
"$c^* = \\text{argmax}_{c\\in candidates} ~~p(c|s) = \\text{argmax}_{c\\in candidates} ~~p(s|c)p(c)$\n",
"\n",
"这里的```candidates```指的是针对于错误的单词的候选集,这部分我们可以假定是通过edit_distance来获取的(比如生成跟当前的词距离为1/2的所有的valid 单词。 valid单词可以定义为存在词典里的单词。 ```c```代表的是正确的单词, ```s```代表的是用户错误拼写的单词。 所以我们的目的是要寻找出在``candidates``里让上述概率最大的正确写法``c``。 \n",
"\n",
"$p(s|c)$,这个概率我们可以通过历史数据来获得,也就是对于一个正确的单词$c$, 有百分之多少人把它写成了错误的形式1,形式2... 这部分的数据可以从``spell_errors.txt``里面找得到。但在这个文件里,我们并没有标记这个概率,所以可以使用uniform probability来表示。这个也叫做channel probability。\n",
"\n",
"$p(c)$,这一项代表的是语言模型,也就是假如我们把错误的$s$,改造成了$c$, 把它加入到当前的语句之后有多通顺?在本次项目里我们使用bigram来评估这个概率。 举个例子: 假如有两个候选 $c_1, c_2$, 然后我们希望分别计算出这个语言模型的概率。 由于我们使用的是``bigram``, 我们需要计算出两个概率,分别是当前词前面和后面词的``bigram``概率。 用一个例子来表示:\n",
"\n",
"给定: ``We are go to school tomorrow``, 对于这句话我们希望把中间的``go``替换成正确的形式,假如候选集里有个,分别是``going``, ``went``, 这时候我们分别对这俩计算如下的概率:\n",
"$p(going|are)p(to|going)$和 $p(went|are)p(to|went)$, 然后把这个概率当做是$p(c)$的概率。 然后再跟``channel probability``结合给出最终的概率大小。\n",
"\n",
"那这里的$p(are|going)$这些bigram概率又如何计算呢?答案是训练一个语言模型! 但训练一个语言模型需要一些文本数据,这个数据怎么找? 在这次项目作业里我们会用到``nltk``自带的``reuters``的文本类数据来训练一个语言模型。当然,如果你有资源你也可以尝试其他更大的数据。最终目的就是计算出``bigram``概率。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.1 训练一个语言模型\n",
"在这里,我们使用``nltk``自带的``reuters``数据来训练一个语言模型。 使用``add-one smoothing``"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk.corpus import reuters\n",
"\n",
"# 读取语料库的数据\n",
"categories = reuters.categories()\n",
"corpus = reuters.sents(categories=categories)\n",
"\n",
"# 循环所有的语料库并构建bigram probability. bigram[word1][word2]: 在word1出现的情况下下一个是word2的概率。 \n",
"\n",
"\n"
]
},
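{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of a bigram language model with add-one smoothing over the reuters sentences loaded\n",
"# above; the lower-casing and the <s> start symbol are assumptions.)\n",
"from collections import defaultdict\n",
"\n",
"unigram_count = defaultdict(int)\n",
"bigram_count = defaultdict(int)\n",
"for sent in corpus:\n",
"    tokens = ['<s>'] + [w.lower() for w in sent]\n",
"    for w in tokens:\n",
"        unigram_count[w] += 1\n",
"    for w1, w2 in zip(tokens, tokens[1:]):\n",
"        bigram_count[(w1, w2)] += 1\n",
"\n",
"V = len(unigram_count)\n",
"\n",
"def bigram_prob(w1, w2):\n",
"    # add-one (Laplace) smoothing\n",
"    return (bigram_count[(w1, w2)] + 1) / (unigram_count[w1] + V)"
]
},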
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.2 构建Channel Probs\n",
"基于``spell_errors.txt``文件构建``channel probability``, 其中$channel[c][s]$表示正确的单词$c$被写错成$s$的概率。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 构建channel probability \n",
"channel = {}\n",
"\n",
"for line in open('spell-errors.txt'):\n",
" # TODO\n",
"\n",
"# TODO\n",
"\n",
"print(channel) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.3 根据错别字生成所有候选集合\n",
"给定一个错误的单词,首先生成跟这个单词距离为1或者2的所有的候选集合。 这部分的代码我们在课程上也讲过,可以参考一下。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def generate_candidates(word):\n",
" # 基于拼写错误的单词,生成跟它的编辑距离为1或者2的单词,并通过词典库的过滤。\n",
" # 只留写法上正确的单词。 \n",
" \n",
" \n"
]
},
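{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of the classic edit-distance candidate generator. It assumes a `vocab` set has been\n",
"# loaded from vocab.txt, e.g. vocab = set(line.strip().lower() for line in open('vocab.txt')).)\n",
"import string\n",
"\n",
"LETTERS = string.ascii_lowercase\n",
"\n",
"def edits1(word):\n",
"    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]\n",
"    deletes = [L + R[1:] for L, R in splits if R]\n",
"    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]\n",
"    replaces = [L + c + R[1:] for L, R in splits if R for c in LETTERS]\n",
"    inserts = [L + c + R for L, R in splits for c in LETTERS]\n",
"    return set(deletes + transposes + replaces + inserts)\n",
"\n",
"def candidates_within_2(word):\n",
"    one_edit = edits1(word)\n",
"    two_edits = set(e2 for e1 in one_edit for e2 in edits1(e1))\n",
"    return {w for w in one_edit | two_edits if w in vocab}   # keep only valid dictionary words"
]
},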
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.4 给定一个输入,如果有错误需要纠正\n",
"\n",
"给定一个输入``query``, 如果这里有些单词是拼错的,就需要把它纠正过来。这部分的实现可以简单一点: 对于``query``分词,然后把分词后的每一个单词在词库里面搜一下,假设搜不到的话可以认为是拼写错误的! 人如果拼写错误了再通过``channel``和``bigram``来计算最适合的候选。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def spell_corrector(line):\n",
" # 1. 首先做分词,然后把``line``表示成``tokens``\n",
" # 2. 循环每一token, 然后判断是否存在词库里。如果不存在就意味着是拼写错误的,需要修正。 \n",
" # 修正的过程就使用上述提到的``noisy channel model``, 然后从而找出最好的修正之后的结果。 \n",
" \n",
" return newline # 修正之后的结果,假如用户输入没有问题,那这时候``newline = line``\n"
]
},
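{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# (Sketch of the noisy-channel scoring for a single misspelled word, assuming `vocab`, `channel`,\n",
"# `bigram_prob` and `candidates_within_2` from the cells above. Log-probabilities and the 1e-10\n",
"# floor for missing channel entries are assumptions made to keep the arithmetic stable.)\n",
"import math\n",
"\n",
"def correct_word(word, prev_word, next_word):\n",
"    if word in vocab:\n",
"        return word\n",
"    best, best_score = word, float('-inf')\n",
"    for cand in candidates_within_2(word):\n",
"        p_channel = channel.get(cand, {}).get(word, 1e-10)\n",
"        score = (math.log(p_channel)\n",
"                 + math.log(bigram_prob(prev_word, cand))\n",
"                 + math.log(bigram_prob(cand, next_word)))\n",
"        if score > best_score:\n",
"            best, best_score = cand, score\n",
"    return best"
]
},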
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.5 基于拼写纠错算法,实现用户输入自动矫正\n",
"首先有了用户的输入``query``, 然后做必要的处理把句子转换成tokens的形状,然后对于每一个token比较是否是valid, 如果不是的话就进行下面的修正过程。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_query1 = \"\" # 拼写错误的\n",
"test_query2 = \"\" # 拼写错误的\n",
"\n",
"test_query1 = spell_corector(test_query1)\n",
"test_query2 = spell_corector(test_query2)\n",
"\n",
"print (get_top_results_tfidf(test_query1))\n",
"print (get_top_results_w2v(test_query1))\n",
"print (get_top_results_bert(test_query1))\n",
"\n",
"print (get_top_results_tfidf(test_query2))\n",
"print (get_top_results_w2v(test_query2))\n",
"print (get_top_results_bert(test_query2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 附录 \n",
"在本次项目中我们实现了一个简易的问答系统。基于这个项目,我们其实可以有很多方面的延伸。\n",
"- 在这里,我们使用文本向量之间的余弦相似度作为了一个标准。但实际上,我们也可以基于基于包含关键词的情况来给一定的权重。比如一个单词跟related word有多相似,越相似就意味着相似度更高,权重也会更大。 \n",
"- 另外 ,除了根据词向量去寻找``related words``也可以提前定义好同义词库,但这个需要大量的人力成本。 \n",
"- 在这里,我们直接返回了问题的答案。 但在理想情况下,我们还是希望通过问题的种类来返回最合适的答案。 比如一个用户问:“明天北京的天气是多少?”, 那这个问题的答案其实是一个具体的温度(其实也叫做实体),所以需要在答案的基础上做进一步的抽取。这项技术其实是跟信息抽取相关的。 \n",
"- 对于词向量,我们只是使用了``average pooling``, 除了average pooling,我们也还有其他的经典的方法直接去学出一个句子的向量。\n",
"- 短文的相似度分析一直是业界和学术界一个具有挑战性的问题。在这里我们使用尽可能多的同义词来提升系统的性能。但除了这种简单的方法,可以尝试其他的方法比如WMD,或者适当结合parsing相关的知识点。 "
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"好了,祝你好运! "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -103,10 +103,33 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 34,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"q 86821 a 86821\n"
]
},
{
"data": {
"text/plain": [
"['When did Beyonce start becoming popular?',\n",
" 'What areas did Beyonce compete in when she was growing up?',\n",
" \"When did Beyonce leave Destiny's Child and become a solo singer?\"]"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import json\n",
"file_path = 'train-v2.0.json'\n",
"\n",
"def read_corpus():\n",
" \"\"\"\n",
" 读取给定的语料库,并把问题列表和答案列表分别写入到 qlist, alist 里面。 在此过程中,不用对字符换做任何的处理(这部分需要在 Part 2.3里处理)\n",
@@ -115,11 +138,27 @@
" 务必要让每一个问题和答案对应起来(下标位置一致)\n",
" \"\"\"\n",
" # TODO 需要完成的代码部分 ...\n",
" with open(file_path,encoding='utf-8') as f:\n",
" corpus = json.load(f) \n",
" \n",
" qlist=[]\n",
" alist=[]\n",
" for item in corpus['data']:\n",
" for para in item['paragraphs']:\n",
" for qas in para['qas']:\n",
" \n",
" qlist.append(qas['question'])\n",
" try:\n",
" alist.append(qas['answers'][0]['text'])\n",
" except IndexError:\n",
" qlist.pop()\n",
" \n",
" print('q',len(qlist),'a',len(alist))\n",
" assert len(qlist) == len(alist) # 确保长度一样\n",
" return qlist, alist"
" return qlist, alist\n",
"\n",
"(qlist, alist) = read_corpus()\n",
"qlist[:3]"
]
},
{
@@ -135,37 +174,118 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"<Figure size 640x480 with 4 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"词数(type):48180\n",
"[['the', 60960], ['What', 37006], ['of', 33992], ['', 30109], ['in', 21795], ['to', 18448], ['was', 17067], ['is', 16200], ['did', 15634], ['what', 13239], ['a', 10753], ['How', 8025], ['Who', 8023], ['and', 7229], ['for', 7208], ['many', 5498], ['are', 5457], ['When', 5368], ['that', 4438], ['were', 4429]]\n"
]
}
],
"source": [
"# TODO: 统计一下在qlist中总共出现了多少个单词? 总共出现了多少个不同的单词(unique word)?\n",
"# 这里需要做简单的分词,对于英文我们根据空格来分词即可,其他过滤暂不考虑(只需分词)\n",
"\n",
"print (word_total)"
"from collections import Counter\n",
"import matplotlib.pyplot as plt\n",
"\n",
"words_cnt = Counter()\n",
"\n",
"for text in qlist:\n",
" words_cnt.update(text.replace('.','').replace('?','').replace('!','').replace(',','').split(' '))\n",
"\n",
"value_sort = sorted(words_cnt.values(), reverse=True) \n",
"\n",
"plt.subplot(221)\n",
"plt.plot(value_sort)\n",
"plt.subplot(222)\n",
"plt.plot(value_sort[:2000])\n",
"plt.subplot(223)\n",
"plt.plot(value_sort[:200])\n",
"plt.subplot(224)\n",
"plt.plot(value_sort[:20])\n",
"plt.show()\n",
"\n",
"# 显示词频最高前10词,因为只取高频值,所以value转换时重合的概率较小,即时重合也没有太大影响\n",
"inverse = dict(zip(words_cnt.values(), words_cnt.keys()))\n",
"print(\"词数(type):%d\" % len(words_cnt))\n",
"print([[inverse[v], v] for v in value_sort[:20]])"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# TODO: 统计一下qlist中出现1次,2次,3次... 出现的单词个数, 然后画一个plot. 这里的x轴是单词出现的次数(1,2,3,..), y轴是单词个数。\n",
"# 从左到右分别是 出现1次的单词数,出现2次的单词数,出现3次的单词数... \n",
"\n"
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"488\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"data": {
"text/plain": [
"[<matplotlib.lines.Line2D at 0x266246a7808>]"
]
},
"execution_count": 3,
"metadata": {},
"outputs": [],
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYoAAAD4CAYAAADy46FuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAgAElEQVR4nO3de3xcVbnw8d8zl9zbpGnTtE0LKaXQlltbIhSrCFSkXI7Vo3hQFOTFU4/AefGoR4qe4wX1vHC8oKio5SKgKCIXqdzLpQIKpWmhd0rTC22atknaJM09mZnn/WOvJJNkMk3STDJJnu/nM5+999pr71l7JTPPrLX3XltUFWOMMaYnvqEugDHGmORmgcIYY0xcFiiMMcbEZYHCGGNMXBYojDHGxBUY6gL014QJE7SwsHCoi2GMMcPK2rVrK1U1ry/bDNtAUVhYSHFx8VAXwxhjhhURea+v24y6rqd1e6ooKa8b6mIYY8ywMWxbFP31z3f+A4Ddt146xCUxxpjhYdS1KIwxxvSNBQpjjDFxWaAwxhgTlwUKY4wxcVmgMMYYE5cFCmOMMXFZoDDGGBNXrwKFiOwWkY0i8raIFLu0XBFZKSLb3XScSxcRuUNESkRkg4jMj9rP1S7/dhG5Oir9TLf/EretDPSBGmOM6Z++tCjOV9W5qlrklpcBL6rqTOBFtwxwMTDTvZYCvwIvsADfBs4GzgK+3RZcXJ6lUdst7vcRGWOMGVDH0vW0BLjfzd8PfCwq/QH1vAHkiMhk4CJgpaoeVtUqYCWw2K0bq6qvq/dc1gei9mWMMWaI9TZQKPC8iKwVkaUuLV9V9wO46USXXgDsjdq21KXFSy+Nkd6NiCwVkWIRKa6oqOhl0Y0xxhyL3o71tFBVy0RkIrBSRN6JkzfW+QXtR3r3RNXlwHKAoqKimHmMMcYMrF61KFS1zE3LgcfxzjEcdN1GuGm5y14KTIvafCpQdpT0qTHSjTHGJIGjBgoRyRSRMW3zwEeATcAKoO3KpauBJ9z8CuAqd/XTAqDGdU09B3xERMa5k9gfAZ5z62pFZIG72umqqH0ZY4wZYr3pesoHHndXrAaAP6jqsyKyBnhYRK4F9gCXu/xPA5cAJUADcA2Aqh4Wke8Ba1y+W1T1sJv/EnAfkA48417GGGOSwFEDharuBM6IkX4IWBQjXYHre9jXvcC9MdKLgVN7UV5jjDGDzO7MNsYYE5cFCmOMMXFZoDDGGBOXBQpjjDFxWaAwxhgTlwUKY4wxcVmgMMYYE5cFCmOMMXFZoDDGGBOXBQpjjDFxWaAwxhgTlwUKY4wxcVmgMMYYE5cFCmOMMXH15sFF00TkZRHZKiKbReRGl/4dEdknIm+71yVR29wsIiUisk1ELopKX+zSSkRkWVT6dBFZLSLbReRPIpIy0AdqjDGmf3rToggBX1XV2cAC4HoRmePW3a6qc93raQC37grgFGAxcKeI+EXED/wSuBiYA3w6aj+3uX3NBKqAawfo+IwxxhyjowYKVd2vquvcfC2wFSiIs8kS4CFVbVbVXXhPujvLvUpUdaeqtgAPAUvc408vAB5x298PfKy/B2SMMWZg9ekchYgUAvOA1S7pBhHZICL3uudggxdE9kZtVurSekofD1SraqhLujHGmCTQ60AhIlnAo8CXVfUI8CtgBjAX2A/8uC1rjM21H+mxyrBURIpFpLiioqK3RTfGGHMMehUoRCSIFyQeVNXHAFT1oKqGVTUC3IXXtQRei2Ba1OZTgbI46ZVAjogEuqR3o6rLVbVIVYvy8vJ6U3RjjDHHqDdXPQlwD7BVVX8SlT45KtvHgU1ufgVwhYikish0YCbwJrAGmOmucErBO+G9QlUVeBn4pNv+auCJYzssY4wxAyVw9CwsBD4HbBSRt13aN/CuWpqL1020G/gigKpuFpGHgS14V0xdr6phABG5AXgO8AP3qupmt7+bgIdE5PvAW3iByRhjTBI4aqBQ1deIfR7h6Tjb/AD4QYz0p2Ntp6o76ei6MsYYk0TszmxjjDFxWaAwxhgTlwUKY4wxcVmgMMYYE5cFCmOMMXFZoDDGGBOXBQpjjDFxWaAwxhgTlwUKY4wxcVmgMMYYE5cFCmOMMXFZoDDGGBOXBQpjjDFxWaAwxhgTV2+eRzGinHn8OFrDkaEuhjHGDBtJ06IQkcUisk1ESkRkWaLeJy3oI8WfNIdtjDFJLym+MUXED/wSuBiYg/f0vDmJej9N1I6NMWYESpaup7OAEvekO0TkIWAJ3uNUB1Tx7iqaQxEeLt6LXwSfDyTmA/zik75vYowxA+Ky06fg9w3el1CyBIoCYG/UcilwdtdMIrIUWApw3HHH9euNmkPe+YmvP7KhX9sbY8xQu+iUSfh9/kF7v2QJFLFCY7ceIlVdDiwHKCoq6lcP0s7/uYSKumZaQhEiqkT6sRfVvm9k3V3GmIEy2OdZkyVQlALTopanAmWJeCOfT8gfm5aIXRtjzIgk/fl1POCFEAkA7wKLgH3AGuAzqro5zjYVwHv9fMsJQGU/tx0prA6sDsDqAEZfHRyvqnl92SApWhSqGhKRG4DnAD9wb7wg4bbp04FGE5FiVS3q7/YjgdWB1QFYHYDVQW8kRaAAUNWngaeHuhzGGGM6S4r7KIwxxiSv0Roolg91AZKA1YHVAVgdgNXBUSXFyWxjjDHJa7S2KIwxxvSSBQpjjDFxjapAMVgj1A4VEdktIhtF5G0RKXZpuSKyUkS2u+k4ly4icoeriw0iMj9qP1e7/NtF5OqhOp7eEpF7RaRcRDZFpQ3YcYvIma5eS9y2STfSVw918B0R2ef+H94WkUui1t3sjmebiFwUlR7zMyIi00VktaubP4lIyuAdXe+IyDQReVlEtorIZhG50aWPqv+FhFDVUfHCuz9jB3ACkAKsB+YMdbkG+Bh3AxO6pP0vsMzNLwNuc/OXAM/gDZ+yAFjt0nOBnW46zs2PG+pjO8pxnwvMBzYl4riBN4Fz3DbPABcP9TH3sg6+A3wtRt457v8/FZjuPhf+eJ8R4GHgCjf/a+BLQ33MMY5rMjDfzY/Bu4l3zmj7X0jEK2EtChHxi8hbIvKkW475i0REUt1yiVtfmKAitY9Qq6otQNsItSPdEuB+N38/8LGo9AfU8waQIyKTgYuAlap6WFWrgJXA4sEudF+o6ivA4S7JA3Lcbt1YVX1dvW+KB6L2lTR6qIOeLAEeUtVmVd0FlOB9PmJ+Rtyv5guAR9z20fWZNFR1v6quc/O1wFa8AUdH1f9CIiSy6+lGvD9Um9uA21V1JlAFXOvSrwWqVPVE4HaXLxFijVBbkKD3GioKPC8ia8UbaRcgX1X3g/dBAia69J7qY6TU00Add4Gb75o+XNzgulXubetyoe91MB6oVtVQl/Sk5X5wzgNWY/8LxywhgUJEpgKXAne75Xi/SKKj/SPAogT1+/VqhNphbqGqzsd7ANT1InJunLw91cdIr6e+Hvdwro9fATOAucB+4McufUTXgYhkAY8CX1bVI/GyxkgbMfUwkBJyH
4WIPAL8P7x+wq8BnwfecK0GRGQa8IyqnupOvi1W1VK3bgdwtqp2G6RLop5HkZmZeWbrmMmd1p9WkD3gx2KMMSPJ2rVrK3WoBwUUkcuAclVdKyLntSXHyKq9WNc5scvzKCo//N1O64tvvbQ/RTbGmFFDRPo86nYiBgVcCHzUXYqXBowFfop3oijg+jmjnzfR9iyKUvGGG8+m9yfljDHGJNiAn6NQ1ZtVdaqqFgJXAC+p6pXAy8AnXbargSfc/Aq3jFv/kiaiP8wYY0y/DOYNdzcBXxGREryrKO5x6fcA4136V/CuczbGGJMkEvo8ClVdBaxy8zvxrtPumqcJuDyR5TDGGNN/o2oID2OMMX1ngcIYY0xcFiiMMcbEZYHCGGNMXBYojDHGxDWiAoWqsq+6caiLYYwxI8qIChQ/fG4bC299iZLy2va0PYcaqGloHcJSGWPM8DaiAsWW/d5AkXsPd7Qqzv3hy5xxy/NDVSRjjBn2RlSgMMYYM/BGZKBoaAlTuOwpXthycKiLYowxw96IDBQb99UAsPyVnUNcEmOMGf5GZKAwxhgzcCxQGGOMicsChTHGmLgsUBhjjInLAoUxxpi4LFAYY4yJywKFMcaYuCxQGGOMicsChTHGmLgsUBhjjInLAoUxxpi4LFAYY4yJKyGBQkSmicjLIrJVRDaLyI0uPVdEVorIdjcd59JFRO4QkRIR2SAi8xNRLmOMMX2XqBZFCPiqqs4GFgDXi8gcYBnwoqrOBF50ywAXAzPdaynwqwSVyxhjTB8lJFCo6n5VXefma4GtQAGwBLjfZbsf+JibXwI8oJ43gBwRmdzf928Ohb0Z6e8ejDHGtEn4OQoRKQTmAauBfFXdD14wASa6bAXA3qjNSl1a130tFZFiESmuqKjo8T2f3XQAgKLjxx37ARhjzCiX0EAhIlnAo8CXVfVIvKwx0rRbgupyVS1S1aK8vLwed7a/pgmAMWnB6G1pDUd6WXJjjDFtEhYoRCSIFyQeVNXHXPLBti4lNy136aXAtKjNpwJlx1qGzWXek+6Oy83gB09t5cq7Vh/rLo0xZtRJ1FVPAtwDbFXVn0StWgFc7eavBp6ISr/KXf20AKhp66I6FlvKvEbMnMljOXCkiTXvHebe13ZRuOwpWkLWujDGmN5IVItiIfA54AIRedu9LgFuBS4Uke3AhW4Z4GlgJ1AC3AVc15s3+cO/ns1T//cD7csVtc0AZKT4Adh1qB6A7HSvC0oVbnlyCwDvHqzlm49v5FBd8zEcpjHGjHyBROxUVV+j52uOFsXIr8D1fX2f98+Y0Gl5s2tBfPSMKTy0Zi8adZbjzV2HO+V9YetBHly9h/NOnsiWsiNcdGo+syaN7WsRjDFmxBuRd2ZnpnaOf82hMOW1nVsOr22vBGBjaTW3v/Aui3/66qCVzxhjhpMRGSi6+svb3c+LF79XBcAdL5UMdnGMMWZYGZGBIuCzO+2MMWagjKhA0XaD3dRx6QCMz0w56jb/PL/jvr4533qWwmVPJaZwxhgzTI2oQJGV5s5NiNeimDPl6Ceng76OKmho8Yb+uOmRDRQuewrVbvf8GWPMqDOiAkVXp0zJ7td2fyr2RhNpGwrEGGNGsxEZKHIzUpiRl8mHZ0/kt9e8j5e++qFueeZMPnpr48aH3mbdnqpEFNEYY4aNEREoHvm3c7hp8az25YwUPy9+9TyKCnM5/+SJnJCX1W2b06cevbWRn53KF+4v5nB9C5f87FXW7D581G2MMWakGRGBoqgwly+dN6NXeadPyAQ67taO578vncPh+hbe2lPFlv1H+OLv1h5TOY0xZjgaEYGizecWHA/AvONyBmR/6W4okLrmEADhiFJe20RVfcuA7N8YY4aDhAzhMVQWzc5n962Xxlz3/H+cS2VtM2U1TXztz+v5+PwCfvPKTi6ck99+8rrNpu9eRF1TiNSAj/GZKfzvs9va1531gxcBeOd7i6lqaGFytncpbl1ziFO//Rw//OTpZKQEmJSdxpn2PAxjzAgwogJFPCflj+Gk/DEAfPLMqQA9BpWs1ABZbhiQH33qDK757RoAahpb2/PMveV5mlojbL1lMQAl5XUA/O6N9zhc30J1Qyt/uf79nDhxTGIOyBhjBsmI6npKhPNPnsjSc0/olt7U6g1TPvtbzzL7W89ywD0saUNpDaVVjdQ1h/j8b9ewr7ox5n5febeC13ccsns1jDFJb9S0KI7m8+8v5LrzZxDre/trHzmZSESZkpPePkz5krlTmJKTzq9W7QCgOMYVUaVVjSy89SU2f/eiTgMV1jWHuOreNwE4OX8MnzizgPnHjePUgmzSgv4EHJ0xxvSfDNdftEVFRVpcXDzo73vpHa9yqK6FlV85l7Sgn5nffKbT+psWz+K2Z98B4LrzZnDnqh188dwTuPmS2TS0hLjuwXX8vaSS1rBSOD6DjJQAW/Z7w6MHfMKsyWOYf9w4Ljt9Cu8rHIdI/8atOtLUysNr9nLNwun4bewrY4wjImtVtagv21iLoo/uuqqI1ICv/Xncb9y8iNuefYeX3imnprGVnIwg3/6nOcyaNJZzZoznUF0L97y2i0+cOZWt+4+walsF55+cx9JzZ3DW9Fz8PqG8ton1e2t4e28Vb+2p5pG1pTzw+nsUjs/g8qJpfGL+VCZlp8UtV9sYVbtvvRRV5fTvPA/AI2tLuffz72NKTnpiK8YYM2JZi2KArNtTxTW/XcMdn57Hh07Ka08/XN/CBT9eRcDno9I9Te/ri0/muvNO7HFfDS0hnt54gD8X72X1rsP4BM49KY9PFU1j0eyJpAa87qnmUJglv/g7+WPT+Nu7FQA8/MVzeHV7BT+PGj59bFqAWz9xOpecNjkRh26MGUb606KwQDGAIhHFF6Ob57F1pXzvyS185uzjWDhjAufMGN/rLqXdlfU8sraUR9eVsr+miZSAj+NzMyickEnemFT+sHoPKQFft2eApwf9NLaGeebGD7Ls0Q2sL60hKzXAVeccz+TsNHIyUsjJCJKT7qYZQTJTAjHLP9CaWsP8ac1ennh7H1+76ORuTyo0xiSOBYoRLBxRXiup5O8lleyqrGd3ZT3vHW6gJRThzivnU5CTTm5mCr9f/R7V9a18/+OnEvR7F7W1hiP89IV3+eXLO476Phkp/vbLgzNTA2Sm+qPmXXpKR3pWmpce8An1zWEaWkLUt4Spbw7R0OzNpwR8jMsIkjcmlWc3HeC5zQcB7zLklnCEX3x6Hh85ZVJC688Y47FAMcpEIkpNYyvjevHcDYCS8lpyM1MJRSLUNLRS3dhKVX0L1Y2t1DS0Utccoq45RH3UtL457M23eMu1TSGau7Re4slI8dMSihCKdP4/u/fzRcybNo7P37eGTftqODEvi6qGFiKqZKS4wJTiJ8NNs1IDnDgxi7nTcsgbk0ooorSGI4TCSigSoSXkTUNhl+7Wt4aVUDhCa0RpDUWIqDIhK5VJ2WlMzk4jf2yaXWlmRhULFGZQhMIRL4C0dASVcETbWxoZbpoW8OPzCapKXXOIg0ea2F3ZQDDgaz+PU98c4gdPb+VQXTM5
6Sn4fOK1SqJaJw3NIWoaW7s993ygjEkLkJMRZFxGCtnpQXIyUhiXESQnPUi2m08P+hERfAI+EXw+3HJHmkBHHp837ZZHYHPZEd47VE9DS5gGV49tra/GljDpQT/Z7v3HZXplGtfeVRjs6DaMKldvtYYjVNW3UFnXwqH6Zg7VtVBZ1+wt1zXTGo4Q8PsI+oWAz0ewbd4tpwR8BHzSnifo95aDfh8Bt9y2bcdy5zwpburtP2pbn29Quj5HOwsUZkSrrGtm/d5qjjS1dvkS8xH0CcFAjC8tX+cvMEGoqGvm4JEm9tc0caCmkcq6FmoaW6luaKGqobV9vqaxlUgCPx5j0rxuvIxUf3uQbTu3VN3Q6r0aW9pv7oxFBNICftJT/KQFfKQF/aQG/aQFfaQFvGl9S5hDdc0cciMGxBL0C7mZKaQG/O0tsJBrsbW4Flo4kZXh+AQCfl/cYBIrILXnd/8LAb+vU55O+4gKXG3/P373gyYcgbAqEXe8EdX2Y49EtGOdy9sSitAUCtPU2vaKdJpvDoXJTg9SMC6DKTlpTM1JZ0pOOgXj0snLSqWhJUxNYyuq4PNBwOeVJeATb+oX/OLmfT78/o51fpF+BVa7PNaMaBOyUlk0O/+Y95OdEeTEid2Hnu8qElFqm0JUNbTQ7LqtIqqo4uZxy24+4k01al1P+U+Zks203IxelbepLXA0trgA4k2rGlppaAl1+oJqjPqCamoNU1kXIj3Fz8mTxjA+M5XxWSmMz0plQqY3HZ+VwoSsVMamBY7aMolEtFOXX2tUV1/M7r5wR3dgS1Q3YWtb92BUno40b7+toY70kNt3i9vGy+N1JYYiERpbXd5QR5nauxvDnbsjByrYtX1RpwR8pAV9pLqgnBb0kxb0AnduZgopAR9V9a1sKK3m2U2NtIYHNtiK0BFUooKMLzrYdFnfH0kTKERkMfAzwA/craq3DnGRzCjn8wnZGUGyM44+JH0ipQX9TMr2H/VemkTz+YQUn/flOFxFItHBpK215C372n+l0/4rvu0L1yfH9iu+7b0r65oprW6krLqRitpmslIDZKcH8Ym4VkpbC8YrU7i99aLty+3rXSunLQB2TCPedl3XhbXbucLeSoquJxHxA+8CFwKlwBrg06q6padtrOvJGGP6rj9dT8ny0+AsoERVd6pqC/AQsGSIy2SMMYbk6XoqAKIfClEKnN01k4gsBZa6xWYR2TQIZRsoE4DKoS5EH1mZE2+4lReszIMhkeU9vq8bJEugiNXp161PTFWXA8sBRKS4r82noTTcygtW5sEw3MoLVubBkGzlTZaup1JgWtTyVKBsiMpijDEmSrIEijXATBGZLiIpwBXAiiEukzHGGJKk60lVQyJyA/Ac3uWx96rq5qNstjzxJRtQw628YGUeDMOtvGBlHgxJVd6kuDzWGGNM8kqWridjjDFJygKFMcaYuIZdoBCRxSKyTURKRGTZUJcnFhGZJiIvi8hWEdksIje69FwRWSki29103FCXNZqI+EXkLRF50i1PF5HVrrx/chcaJA0RyRGRR0TkHVfX5wyDOv4P9z+xSUT+KCJpyVbPInKviJRH36fUU72K5w73edwgIvOTpLw/dP8XG0TkcRHJiVp3syvvNhG5aLDL21OZo9Z9TURURCa45SGv42EVKNxQH78ELgbmAJ8WkTlDW6qYQsBXVXU2sAC43pVzGfCiqs4EXnTLyeRGYGvU8m3A7a68VcC1Q1Kqnv0MeFZVZwFn4JU9aetYRAqA/wsUqeqpeBduXEHy1fN9wOIuaT3V68XATPdaCvxqkMoY7T66l3clcKqqno43PNDNAO5zeAVwitvmTve9Mtjuo3uZEZFpeEMZ7YlKHvI6HlaBgmEy1Ieq7lfVdW6+Fu8LrACvrPe7bPcDHxuaEnYnIlOBS4G73bIAFwCPuCzJVt6xwLnAPQCq2qKq1SRxHTsBIF1EAkAGsJ8kq2dVfQU43CW5p3pdAjygnjeAHBEZ1Iezxyqvqj6vqiG3+AbevVnglfchVW1W1V1ACd73yqDqoY4Bbge+Tucbjoe8jodboIg11EfBEJWlV0SkEJgHrAbyVXU/eMEEmDh0Jevmp3j/oG0PPxgPVEd92JKtrk8AKoDfuu6yu0UkkySuY1XdB/wI79fifqAGWEty13Obnup1OHwm/w/wjJtP2vKKyEeBfaq6vsuqIS/zcAsUvRrqI1mISBbwKPBlVT0y1OXpiYhcBpSr6tro5BhZk6muA8B84FeqOg+oJ4m6mWJx/fpLgOnAFCATr1uhq2Sq56NJ6v8TEfkmXlfwg21JMbINeXlFJAP4JvCtWKtjpA1qmYfVfRQicg7wHVW9aMKECVpYWNhp/cZ9Ne3zpxVkx1zXNd0YY0aTtWvXVqpqXl+2SYo7s/ugfaiPM888k67Poyhc9lT7fPGtl8Zc1zXdGGNGExF5r6/bDKuuJ9eP2zbUhzHGmEEwrAIFgKo+raonDXU5jDFmtBh2gcIYY8zgskBhjDEmLgsUxhhj4rJAYYwxJi4LFMYYY+JKWKDo7UikIpLqlkvc+sJElckYY0zfJbJF0duRSK8FqlT1RLwBsW5LYJmMMcb0UUICRR9HIo0elfIRYJHLb4wxJgkkqkXRl5FI20dGdOtrXH5jjDFJYMADRT9GIu31yIgislREikWkuKKi4hhLaowxpjcS0aJYCHxURHbjPVjoArwWRo57WAt4DxEpc/OlwDQAtz6b2A/0QFWXq2qRqhbl5fVp8ENjjDH9NOCBQlVvVtWpqlqI98jBl1T1SuBl4JMu29XAE25+hVvGrX9Jh9PY58YYM8IN5n0UNwFfEZESvHMQ97j0e4DxLv0rJPnDZ4wxZrRJ6PMoVHUVsMrN7yTGs2lVtQm4PJHlMMYY0392Z7Yxxpi4LFAYY4yJywKFMcaYuCxQAA0tIULhyNEzGmPMKGSBApjzree44Md/69M2OyvqWLG+7OgZjTFmmEvoVU/DyZ7DDX3K3xZYPnrGlEQUxxhjkoa1KOIIhSPc+NBbRCJ2/58xZvSyQBHHr/+2gyfeLuP3q98b6qIYY8yQsUARR11z2E1DR8lpjDEjlwUKY4wxcVmgMMYYE9eoCBS7K+uHugjGGDNsjejLY/dVN/LzF7fz57WlQ10UY4wZtkZsoPj2E5v445t7AfjcguO57x+7h7ZAxhgzTI3YQPHg6j1cXjSVGy6YSUFOugUKY4zppxEbKF786oc4fnzmUBfDGGOGvRF5Mvuqc463IGGMMQMkIYFCRKaJyMsislVENovIjS49V0RWish2Nx3n0kVE7hCREhHZICLzE1EuY4wxfZeoFkUI+KqqzgYWANeLyBy852G/qKozgRfpeD72xcBM91oK/OpY3jzgG5ENJWOMGRIJ+UZV1f2qus7N1wJbgQJgCXC/y3Y/8DE3vwR4QD1vADkiMrmv7ztr0hgAPjBz/LEdgGP3XxhjzCCcoxCRQmAesBrIV9X94AUTYKLLVgDsjdqs1KV13ddSESkWkeKKiopu7zUpO83LhxxTmWsaW/nPP6/n2c0HAJgzeWyn9ao2mqw
xZvRIaKAQkSzgUeDLqnokXtYYad2+jVV1uaoWqWpRXl7eQBWzk1Xbyrno9ld4dF0pZ0zNBiAztfPFYQ8X7421abtXt1ew7UBtQspnjDGDLWGBQkSCeEHiQVV9zCUfbOtSctNyl14KTIvafCow6I+P+/oj6/n8b9cwJi3A49ct5D8vmtUtz46KOr6zYgsAY9JiX138zcc38Zm73mBfdWOndFXl3//4Fg0tNhqtMWb4SNRVTwLcA2xV1Z9ErVoBXO3mrwaeiEq/yl39tACoaeui6osvfWgGAGdNz+1XuR9ZW8qXzpvBX//9A5wxLafb+paQ9yCjtKCPi0+dhE9id3GFI8qh+ha++LtimlrD7enPbznIX9eX8f2ntnbb5t2Dtfz+DXvuhTEm+SSqRbEQ+BxwgYi87V6XALcCF4rIduBCtwzwNLATKAHuAq7rz5uefcJ4dt96abeuomg3P7aRNVe3QMAAAA/XSURBVLsPxzzP8Nh1C7lp8SzSgv6Y2/74+W1s2neE2z5xOvlj02Lmqahtbm9JbC47ws2PbWx/r7aWREOX51ts2lfD5b9+nf/6yyZaw5FO6yIR5YY/rONnL2zv8ZiMMSaREnJntqq+RuzzDgCLYuRX4PpElKWrx98q5Y9v7mFabjofn1vAx+dPbV83N0Yros1r2yv5zSs7ufLs4/jIKZP4x45DndbXNrVy16u7uPvVne1pX/nwSfx45bucMmUsX/jgCTH3u7G0hs/es5qaxtb2tL2HG/h7SSWvllTyj5JKqhpagf3c+OGZ/TxqY4zpvxE7hEdPiv/rQp7ddIC/vLWPn79cwh0vlRx1m8P1Lfz3XzZx4sQs/uvSOZ3WNYfCPPjGHn7xcgmH61u49LTJPLXR6zW7/vwT2Vx2hP95eiuzJo3ttt8NpdV89u7VjEkLMndaDn97t4LzfriqvUUycUwq58+ayGPr9g3AkRtjTP+MujvTslIDfPLMqfz+C2fz+rJF3Hxx9xPWXX3vyS1UN7RyxxXzSE/xuqXCEaWmsZVFP/4btzy5hdmTx/DE9Qv55ZUdN5X7fMKPPnUGJ07M4oY/rmPPoY6T2+v3VnPl3asZmx7koaULWLP7MAAn5GXy7X+aw8r/OJfV31jETz41t8dyhSNK4bKneOXd7pcKG2PMQBl1gSLapOw0vuhOgMdTWtXITRfPYs6UjlbB79yJ55yMIL+79iwe/MKCmCfAs1IDLP9cEZGIcvsL7wKwvrSGz969mpwML0hMy80g4s5jLP9cEdcsnM7M/DFIDyfL27x70LsE93+e7n5y/O5Xd1K47Ck2lFYf9fiMMSaeUR0oeutDJ+VxzfsLO6WluxPeK67/AB+cGf+ejsIJmfz8Mx0tjV2V9YzLTOGhpecwdVxGn8sTiSjbDtTywOu7AdhzuKHT+rXvHW6/suqjv/g7F/7kb/zvs++wbk8VkYjdLGiM6ZtRd46iL04ryOaqc47n3y+Yic/X+df9W9+6kCNNrd3Se/Khk/I4OX8M21wr4KGlC5iSk96n8vz277tYvfMwq3cdcie4PQ0t3iW45bVN3PrMO53OaSyZO4WK2mZ+88pO7ly1gwlZqSyaNZHPnH1czBbQ6p2HmDV5LNnpwT6VzRgzco2aQLH71kv7vE12RpBblpwac11a0N/jZbQ9+cIHp/Ofj2zg7Om5fQ4SAN/96xamjkvngln5LDghl9aw8o3HNwJeV9NPX9hOcyjMl86bwcotBykpr+OahdOZOy2HmoZWVr1bzsotB1mxvoxtB2v5y/ULO+0/ElH+ZfkbHJebwStfP7/P5TPGjEyjJlAczbknJWZIkGgBv9f6mJwd+x6Mo3ntpvM7dVW9ur3jJPb3n9rKeSfn8a3L5nBCXhZl1Y2UlNcxPjMF8ILekrkFLJlbwLX3rWHtnip+9/puLj5tMhOyUju9z96qzl1Z4J04v+e1nSw99+jndIwxI4sFCvrX2ojn15+dH/PBSRfMygeI+WX7rx88gZ+/VEJqoOfTRl3PZ0S3aO6+qohFsye2nwD/0eVn8LkFxzMtt/s5kCvOOo69VQ389xOb+faKzSw8cQL/dPoULpzjlU8VyqobmZyd1r6/Hz+/jTtX7WD6hKz2fADVDS08uWE/qsqVZx/f6664N3cd5lO/eZ1ffGYel50+pdO6VdvK+fXfdvDBmXlcfOokTsjL6tU+jTGJIcN1JNSioiItLi4e6mIMisJlTwHdA9rW/Ue4+GevMn1CJi9/7bw+73fbgVr+ur6Mv24o471DDQR8QijqZHfemFTOmJrDGVOzeXLDfrYdrOXWfz6Nj88v4OV3ynls3T5e3lZOa9jbZkp2GkvmFfDP8wqYme8N+X7Bj1ZxxVnTugXHe1/bxS1PemNmXfuB6ZwxLYe5U3OYlpvOD5/zglKbk/PHsPjUSVxy2mROys866tVgfdHUGqaitpmJY1NJDfStK9GY4UhE1qpqUZ+2sUCR/HoKFJGIcsI3nubP/3YO7yvs3/hW4A1WuHFfDSveLuPu13aRm5nCjYtmsn5vNW+XVrOzouO5HAU56dQ1h6hpbGVCVipL5k7h0tMnU1rVyOPrSnlleyXhiHJqwVg+Pm8q33PB4M4r53PuSXlkueFVXt9xiE/f9QYAqQEfzSFv6JJxGcH2E/X/WHYBz246wLObDrDmvcO0/aueefw4vnjuCSyanY+/ly2YaM2hMNUNrVQ1tPCvDxSz97B3f8uErFQKctKYnJ3O5Jw0pmSnk5MRZExakDFpAfcKkpXqzff1HJUxycACxQh1qK6ZgM9HdsbQXIlU09jKb/62gztX7SAt6GPxKZP42LwCPnDiBAL+zl1lFbXN/HV9GY+/tY+N+2o6rUvx+zhnxng+PCef8ZkpXPfgOu675n0sPHEC2w7UsqG0hvV7q/mTG8Y9OjCW1zbx/OaDPFy8lw2l3n6PH5/B599fyOVF09oDUCyb9tXwX3/ZRGVdM1X1LdS3hLvl+Y8Pn8T+mkb2VTeyv6aJsurG9qvJepLi9zEmLUBWWxBJDZKVFmBcRpD8sWlMHJvGpLFp5I9NJX9sGuMzU7rV12Craw7xzv4jFPXhh8WhumZuenQjH5w5gaLCccycOIaUOF2kJrlZoDAJE44ob+46zGlTs+N+KUcrKa/luc0H+dBJedQ3h3hh60FWbjnI7kMdJ8vvu+Z9nHfyxE7bbdpXwzOb9scc5h2gNRzhuc0HuOe1Xby1p5oxqQH+5X3TuOqcQgJ+Yc/hBu91yJuuWO+NWH/xqZOYnJ1ObmaQnIwUcjNTyMkIcsqU7G6XA6sqR5pCHGls5UhTK7VNIWqbQtQ1d8x7r9b2aV2zl3a4voXKuma63rLiE6/Vku+Cx8SxaeSP6QgkE900NyOl1+d6YlFVVrm79fOyUpk4NpXxman4fcLFP3uVrfuPcOeV8zmtIJuCnPSY77Wjoo6VWw7ywpaDFL9X1Wld0C+clD+GU6aMZc7ksZxSkM3syWN7/X9hhpYFCpP0VNV9CZWzqayG7y05lVx3ZVZ/rNtTxb2v7eKZTQcId/lm9vuEKTlp7V1L276/eN
DOQ4TCEQ7Vt3DwSBMHjzRz8EgT5W3ztd60/EgTh+pbum0b8AkTx7hAMjaV3MxUMlL8pAf9pKf4SQ34SG9bDvpJc/NpbvlQfTOfuWt1p336BHIzU6msa+6UnpniZ2b+GE7OH8PM/CzKa5t5YctBdrrHAJ8yZSzzjsvh92/sYeq4dL6+eBaby2rYUnaEzWVHOOzKLwKF4zOZM2Usp0wZyylTspk+PpPDDS0EfELALwR8PoJ+IeD3EfR504BfCPp8+H1C0C8Dev7JxGaBwoxaZdWNPLmhjMzUAMflZnB8biaTc9IIDnFXz9G0hCJU1DVzoKYtkDRxsLYtsHjTw/UtNLWGaWwNd2ulxHPT4llMn5BJRW0T5bXNVNQ2U17bTGrAxxc+OJ13D9ax7UAt7x70XpV1LQT9wjkzJnDh7Iksmp3ffr/P8ld2cPmZ0xgXFdRVlYNHmtlcVsPmsiPt09Kqxp6KdFR+nxDwCUEXRDqCS0dAaXsFfIKvbSpeHp94y/5OLx9+wZv6OqYBnw+fSKc0L2/nffm6vJdfuu7fvUTw+71pW/7odSIQiiiRiBKOKGF104gSUSUcgXAk4k01Kl9U3ogqobC2D/kTcAE36Ooq4PfqLjXgIz3oJzXY8QOjrinEP/3iNd677TILFMaMVKpKa1hpbA17gaMl3DHfnhZpf1jWpadP7tMJ98q6ZtKC/mPuQqppaGXzfq/VUdsU4pQpYwlFlNZwhFBYCUUitIaVUDji0pVwW1rEy9MaM5+3rv1LNOqLtu0LONTlyzQ6veuXbreXSx/p+hMorFPRmGFCREgJCCkBX0KGWOl642V/ZWcEef+MCbx/xoQB2d9gUlUiSufg4YJTKBIhEqHzVLsEoR6CTziiqNLe4ohuifgkqoXUvq6jdePz0dFiadvGnVcKRQXXUFhpdfPNIe+HRFMo4k3djwm/T/iX2/peLxYojDHGERHXTTVyz5X8Sz+2SZoOXBFZLCLbRKRERJYNdXmMMcZ4kiJQiIgf+CVwMTAH+LSIzIm/lTHGmMGQFIECOAsoUdWdqtoCPAQsGeIyGWOMIXnOURQAe6OWS4Gzu2YSkaXAUrfYLCKbBqFsw8EEoHKoC5EkrC46WF10sLrocHJfN0iWQBHrzFG369RUdTmwHEBEivt6iddIZXXRweqig9VFB6uLDiLS5/sKkqXrqRSYFrU8FSgborIYY4yJkiyBYg0wU0Smi0gKcAWwYojLZIwxhiTpelLVkIjcADwH+IF7VXXzUTZbnviSDRtWFx2sLjpYXXSwuujQ57oYtkN4GGOMGRzJ0vVkjDEmSVmgMMYYE9ewCxSjeagPEblXRMqj7x8RkVwRWSki29103FCWcbCIyDQReVlEtorIZhG50aWPuvoQkTQReVNE1ru6+K5Lny4iq11d/MldKDIqiIhfRN4SkSfd8qisCxHZLSIbReTttsti+/MZGVaBwob64D5gcZe0ZcCLqjoTeNEtjwYh4KuqOhtYAFzv/hdGY300Axeo6hnAXGCxiCwAbgNud3VRBVw7hGUcbDcCW6OWR3NdnK+qc6PuI+nzZ2RYBQpG+VAfqvoKcLhL8hLgfjd/P/CxQS3UEFHV/aq6zs3X4n0pFDAK60M9dW4x6F4KXAA84tJHRV0AiMhU4FLgbrcsjNK66EGfPyPDLVDEGuqjYIjKkizyVXU/eF+ewMSj5B9xRKQQmAesZpTWh+tqeRsoB1YCO4BqVQ25LKPps/JT4OtAxC2PZ/TWhQLPi8haNwQS9OMzkhT3UfRBr4b6MKOHiGQBjwJfVtUjo/WZy6oaBuaKSA7wODA7VrbBLdXgE5HLgHJVXSsi57Ulx8g64uvCWaiqZSIyEVgpIu/0ZyfDrUVhQ310d1BEJgO4afkQl2fQiEgQL0g8qKqPueRRWx8AqloNrMI7b5MjIm0/BkfLZ2Uh8FER2Y3XNX0BXgtjNNYFqlrmpuV4PyDOoh+fkeEWKGyoj+5WAFe7+auBJ4awLIPG9TvfA2xV1Z9ErRp19SEiea4lgYikAx/GO2fzMvBJl21U1IWq3qyqU1W1EO/74SVVvZJRWBcikikiY9rmgY8Am+jHZ2TY3ZktIpfg/UJoG+rjB0NcpEEjIn8EzsMbMvkg8G3gL8DDwHHAHuByVe16wnvEEZEPAK8CG+noi/4G3nmKUVUfInI63klJP96Pv4dV9RYROQHvV3Uu8BbwWVVtHrqSDi7X9fQ1Vb1sNNaFO+bH3WIA+IOq/kBExtPHz8iwCxTGGGMG13DrejLGGDPILFAYY4yJywKFMcaYuCxQGGOMicsChTHGmLgsUBhjjInLAoUxxpi4/j8AsI7A91jTDAAAAABJRU5ErkJggg==\n",
"text/plain": [
"<Figure size 432x288 with 3 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"# TODO: 从上面的图中能观察到什么样的现象? 这样的一个图的形状跟一个非常著名的函数形状很类似,能所出此定理吗? \n",
"# hint: [XXX]'s law\n",
"# \n",
"# "
"# TODO: 统计一下qlist中出现1次,2次,3次... 出现的单词个数, 然后画一个plot. 这里的x轴是单词出现的次数(1,2,3,..), y轴是单词个数。\n",
"# 从左到右分别是 出现1次的单词数,出现2次的单词数,出现3次的单词数...\n",
"import numpy as np\n",
"cnt = Counter(words_cnt.values())\n",
"\n",
"print(len(cnt))\n",
"cnt = dict(cnt)\n",
"sorted_cnt = sorted(cnt.items(), key=lambda cnt: cnt[1])\n",
"\n",
"X = [x[0] for x in sorted_cnt]\n",
"Y = [x[1] for x in sorted_cnt]\n",
"\n",
"# x = list(sorted_cnt.keys())\n",
"# y = list(sorted_cnt.values())\n",
"# plt.plot(Y,X )\n",
"plt.subplot(311)\n",
"plt.plot(Y, X)\n",
"plt.subplot(312)\n",
"plt.xlim((0, 150))\n",
"plt.ylim((0, 400))\n",
"plt.plot(Y, X)\n",
"plt.subplot(313)\n",
"plt.xlim((0, 50))\n",
"plt.ylim((0, 500))\n",
"plt.plot(Y, X)\n"
]
},
{
@@ -187,13 +307,81 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 33,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"min_tf 1\n",
"[['when', 'beyonce', 'start', 'becoming', 'popular'], ['area', 'beyonce', 'compete', 'when', 'wa', 'growing'], ['when', 'beyonce', 'leave', 'destiny', 'child', 'become', 'solo', 'singer'], ['city', 'state', 'beyonce', 'grow'], ['decade', 'beyonce', 'become', 'famous'], ['r', 'b', 'group', 'wa', 'lead', 'singer'], ['album', 'made', 'worldwide', 'known', 'artist'], ['who', 'managed', 'destiny', 'child', 'group'], ['when', 'beyoncé', 'rise', 'fame'], ['role', 'beyoncé', 'destiny', 'child']]\n"
]
}
],
"source": [
"# TODO: 需要做文本方面的处理。 从上述几个常用的方法中选择合适的方法给qlist做预处理(不一定要按照上面的顺序,不一定要全部使用)\n",
"\n",
"qlist = # 更新后的问题列表"
"from nltk.corpus import stopwords\n",
"# from nltk.stem import PorterStemmer#标准化\n",
"from nltk.stem import WordNetLemmatizer\n",
"from nltk.tokenize import word_tokenize#分词\n",
"import math\n",
"\n",
"# ps = PorterStemmer()# lemmazation\n",
"whl = WordNetLemmatizer()\n",
"words = stopwords.words('english')\n",
"\n",
"sw = set(words)\n",
"sw -= {'who', 'when', 'why', 'where', 'how'}\n",
"sw.update(['\\'s', '``', '\\'\\'','?','.',',','%','$','@','&','#'])\n",
"\n",
"\n",
"def text_process(text):\n",
" seg=[]\n",
" for word in word_tokenize(text) :\n",
" if word == '' or word == ' ':\n",
" pass\n",
" else:\n",
" if word.isdigit() :\n",
" word = '#number'\n",
" else :\n",
"# word = ps.stem(word.lower())\n",
" word = whl.lemmatize(word.lower(), 'n')\n",
" \n",
" if word not in sw: \n",
" seg.append(word)\n",
" return seg\n",
"\n",
"\n",
"def process (qlist):\n",
" qlist_seg = list()\n",
" words_cnt = Counter()\n",
" \n",
" for text in qlist:\n",
" seg = text_process(text)\n",
" qlist_seg.append(seg)\n",
" words_cnt.update(seg)\n",
" \n",
" # 根据Zipf定律计算99%覆盖率下的过滤词频,解释见程序下边\n",
" # Zipf's law一个实验定律,按照从最常见到非常见排列,第二常见的频率是最常见频率的出现次数的1/2,\n",
" # 第三常见的频率是最常见的频率的1/3,第n常见的频率是最常见频率出现次数的1/n。\n",
" # 假设我们文本的词频符合该定律,那么对1/n进行积分得到ln(n),为了使99%的文本得到覆盖则需ln(x)>0.99*ln(n),\n",
" # n是词type数,x是词频从高到底排列时的阈值分割点,最后x=e^(0.99*ln(n))。\n",
" value_sort = sorted(words_cnt.values(), reverse=True)\n",
" min_tf = value_sort[int(math.exp(0.99 * math.log(len(words_cnt))))]\n",
"\n",
" for cur in range(len(qlist_seg)):\n",
" qlist_seg[cur] = [word for word in qlist_seg[cur] if words_cnt[word] > min_tf] \n",
" \n",
" return qlist_seg,min_tf\n",
" \n",
"[qlist_seg,min_tf] = process (qlist)\n",
"\n",
"\n",
"print('min_tf',min_tf)\n",
"print(qlist_seg[:10])\n",
"\n"
]
},
{
@@ -220,14 +408,15 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# TODO \n",
"vectorizer = # 定义一个tf-idf的vectorizer\n",
"from sklearn.feature_extraction.text import TfidfVectorizer \n",
"\n",
"X_tfidf = # 结果存放在X矩阵里"
"vectorizer = TfidfVectorizer()\n",
"X_tfidf = vectorizer.fit_transform([ ' '.join(seg) for seg in qlist_seg ] )\n"
]
},
{
@@ -245,11 +434,98 @@
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from gensim.models import KeyedVectors\n",
"from gensim.scripts.glove2word2vec import glove2word2vec\n",
"import numpy as np\n",
"\n",
"# 将GloVe转为word2vec\n",
"_ = glove2word2vec('../../03-githubNLP/glove/glove.6B.100d.txt', '../../03-githubNLP/glove/glove2word2vec.6B.100d.txt')\n",
"model = KeyedVectors.load_word2vec_format('../../03-githubNLP/glove/glove2word2vec.6B.100d.txt')\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"E:\\apppath\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `wv` (Attribute will be removed in 4.0.0, use self instead).\n",
" \"\"\"Entry point for launching an IPython kernel.\n"
]
},
{
"data": {
"text/plain": [
"<gensim.models.keyedvectors.Word2VecKeyedVectors at 0x2586fc66e08>"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.wv"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"E:\\apppath\\lib\\site-packages\\ipykernel_launcher.py:10: DeprecationWarning: Call to deprecated `wv` (Attribute will be removed in 4.0.0, use self instead).\n",
" # Remove the CWD from sys.path while we load stuff.\n",
"E:\\apppath\\lib\\site-packages\\ipykernel_launcher.py:14: RuntimeWarning: invalid value encountered in true_divide\n",
" \n"
]
}
],
"source": [
"import numpy as np\n",
"\n",
"\n",
"def sent_vector(sent):\n",
"\n",
" sent_vector = np.zeros((100), np.float)\n",
" sent_size = len(sent)\n",
" for x in sent:\n",
" try:\n",
" sent_vector += model.wv[x]\n",
" except KeyError:\n",
" sent_size -= 1\n",
"\n",
" return sent_vector / sent_size\n",
"\n",
"\n",
"X = np.zeros((len(qlist_seg), 100), np.float)\n",
"\n",
"for index in range(X.shape[0]):\n",
" X[index] = sent_vector(qlist_seg[index])\n",
"\n",
"norm = np.linalg.norm(X, axis=1,keepdims=True)\n",
"X = X / norm"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 基于Glove向量获取句子向量\n",
"\n",
"\n",
"emb = # 这是 D*H的矩阵,这里的D是词典库的大小, H是词向量的大小。 这里面我们给定的每个单词的词向量,\n",
" # 这需要从文本中读取\n",
" \n",
@@ -266,13 +542,72 @@
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from bert_embedding import BertEmbedding\n",
"\n",
"bert_embedding = BertEmbedding(model='bert_12_768_12', dataset_name='book_corpus_wiki_en_cased')"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(768,)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qlist_seg[1]\n",
"vector = bert_embedding.embedding(['hh'])\n",
"# type(vector[0][1][0])\n",
"vector[0][1][0].shape"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# bert_embedding.embedding(qlist_seg[1])\n",
"\n",
"def sent_bert_vector(sent):\n",
" avg_vector = np.zeros(768)\n",
" sent_vector = bert_embedding.embedding(sent)\n",
" for vector in sent_vector :\n",
" avg_vector += vector[1][0]\n",
" return avg_vector / len(sent_vector)\n",
" \n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 基于BERT的句子向量计算\n",
"# X_bert = []\n",
"# leng = len(qlist_seg)\n",
"# for index in range(len(qlist_seg)):\n",
"# X_bert.append(sent_bert_vector(qlist_seg[index]))\n",
"# if index % 10 ==0 :\n",
"# print(index)\n",
" \n",
"\n",
"X_bert = # 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。 "
" "
]
},
{
@@ -286,13 +621,6 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -302,10 +630,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from queue import PriorityQueue\n",
"\n",
"def get_top_results_tfidf_noindex(query):\n",
" # TODO 需要编写\n",
" \"\"\"\n",
@@ -314,22 +644,44 @@
" 2. 计算跟每个库里的问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" query = text_process(query)\n",
" tf_idf = vectorizer.transform([' '.join(query)])\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下标 \n",
" # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n",
" sim = (X_tfidf * tf_idf.T).toarray()\n",
" p = PriorityQueue()\n",
" for cur in range(sim.shape[0]):\n",
" p.put((sim[cur][0],cur))\n",
" if len(p.queue) > 5 :\n",
" p.get()\n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 "
" p_rank = sorted(p.queue, reverse=True, key=lambda x:x[0])\n",
" top_idxs = [x[1] for x in p_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" \n",
" return [alist[x] for x in top_idxs ] # 返回相似度最高的问题对应的答案,作为TOP5答案 "
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['Chengdu Shuangliu International Airport', 'Chengdu Shuangliu International Airport', 'aerodrome with facilities for flights to take off and land', 'Nanjing Dajiaochang Airport', 'newspapers']\n",
"['Plymouth City Airport', 'aerodrome with facilities for flights to take off and land', 'Nanjing Dajiaochang Airport', 'Wye College campus', 'six months']\n",
"['Myanmar', 'foreign aid', '10 days', 'the British government', 'access is blocked to local and foreign websites including avesta.tj, Tjknews.com, ferghana.ru, centrasia.ru']\n",
"['Myanmar', 'Isabel', 'foreign aid', '10 days', 'Gaston']\n"
]
}
],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"print (get_top_results_tfidf_noindex(\"\"))\n",
"print (get_top_results_tfidf_noindex(\"\"))"
"print (get_top_results_tfidf_noindex(\"Which airport was shut down?\"))\n",
"print (get_top_results_tfidf_noindex(\"Which airport is closed?\"))\n",
"print (get_top_results_tfidf_noindex(\"What government blocked aid after Cyclone Nargis?\"))\n",
"print (get_top_results_tfidf_noindex(\"Which government stopped aid after Hurricane Nargis?\"))"
]
},
{
@@ -349,12 +701,112 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"# TODO 请创建倒排表\n",
"inverted_idx = {} # 定一个一个简单的倒排表,是一个map结构。 循环所有qlist一遍就可以"
"# TODO 请创建倒排表 https://www.jianshu.com/p/bbd258f99fd3\n",
"from collections import defaultdict\n",
"from queue import PriorityQueue\n",
"\n",
"\n",
"def get_invertedTable(qlist_seg):\n",
" inverted_idx = defaultdict(set)\n",
"\n",
" for index in range(len(qlist_seg)):\n",
" for word in qlist_seg[index]:\n",
" inverted_idx[word].add(index)\n",
" return inverted_idx\n",
"\n",
"\n",
"inverted_idx = get_invertedTable(qlist_seg)\n",
"\n",
"\n",
"def get_top_results_v2c_noindex(sent):\n",
"\n",
" candidates = set()\n",
" sent = text_process(sent)\n",
" \n",
" for word in sent:\n",
" candidates |= inverted_idx[word]\n",
"\n",
" v2c_sent = sent_vector(sent)\n",
" sim = X[list(candidates)] * v2c_sent.T\n",
" norm = np.linalg.norm(sim, axis=1, keepdims=True)\n",
" sim /= norm\n",
"\n",
" p = PriorityQueue()\n",
" for cur in range(sim.shape[0]):\n",
" p.put((sim[cur][0], cur))\n",
" if len(p.queue) > 5:\n",
" p.get()\n",
"\n",
" p_rank = sorted(p.queue, reverse=True, key=lambda x: x[0])\n",
" print([x[0] for x in p_rank])\n",
"\n",
" top_idxs = [x[1] for x in p_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表\n",
"\n",
" return [alist[x] for x in top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"airport\n",
"wa\n",
"shut\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"E:\\apppath\\lib\\site-packages\\ipykernel_launcher.py:10: DeprecationWarning: Call to deprecated `wv` (Attribute will be removed in 4.0.0, use self instead).\n",
" # Remove the CWD from sys.path while we load stuff.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[0.07330567254298624, 0.06972466973314612, 0.06948123431150044, 0.06930443592810431, 0.06822893061375211]\n",
"['a few hundred years old', 'Lee and Guenther', 'sizing', 'skandhas', \"Mozart's Requiem\"]\n",
"airport\n",
"closed\n",
"[0.030079029833840667, 0.028042975518527687, 0.020720007655157007, 0.020296846939214908, 0.019316392495591254]\n",
"['acting', 'Men in Black', 'November 18, 2008', '4', 'November 2003']\n",
"government\n",
"blocked\n",
"aid\n",
"cyclone\n",
"nargis\n",
"[0.08121490453404367, 0.07908663825738113, 0.07476291090892337, 0.0730369046518037, 0.07110477811995523]\n",
"['24 million', 'Jay Z', 'The Beyoncé Experience', 'The Beyoncé Experience', \"Dwayne Wiggins's Grass Roots Entertainment\"]\n",
"government\n",
"stopped\n",
"aid\n",
"hurricane\n",
"nargis\n",
"[0.01048527515772671, 0.008774907077175162, 0.006720722994909599, 0.006713101288648199, 0.006408367260858867]\n",
"['158.8 million', '1995', '2013', 'top five', '119.5 million']\n"
]
}
],
"source": [
"print(get_top_results_v2c_noindex(\"Which airport was shut down?\"))\n",
"print(get_top_results_v2c_noindex(\"Which airport is closed?\"))\n",
"print(\n",
" get_top_results_v2c_noindex(\n",
" \"What government blocked aid after Cyclone Nargis?\"))\n",
"print(\n",
" get_top_results_v2c_noindex(\n",
" \"Which government stopped aid after Hurricane Nargis?\"))"
]
},
{
@@ -371,16 +823,131 @@
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"def write_json(file,data):\n",
" # 将bx列表写入json文件\n",
" with open(file, 'w') as f_obj: \n",
" json.dump(data, f_obj)\n",
" \n",
"write_json('related_words.josn',related_words)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"def read_json(file):\n",
" # 读取存储于json文件中的列表\n",
" with open(file, 'r') as f_obj:\n",
" jlist = json.load(f_obj)\n",
" return jlist\n",
"\n",
"related_words = read_json('related_words.josn')\n"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"句子包含单词数量: 18552\n",
"1000\n",
"2000\n",
"3000\n",
"4000\n",
"5000\n",
"6000\n",
"7000\n",
"8000\n",
"9000\n",
"10000\n",
"11000\n",
"12000\n",
"13000\n",
"14000\n",
"15000\n",
"16000\n",
"17000\n",
"18000\n"
]
}
],
"source": [
"def sim_word_top10(qlist_seg):\n",
" # _ = glove2word2vec('../../03-githubNLP/glove/glove.6B.100d.txt',\n",
" # '../../03-githubNLP/glove/glove2word2vec.6B.100d.txt')\n",
" model = KeyedVectors.load_word2vec_format(\n",
" '../../03-githubNLP/glove/glove2word2vec.6B.100d.txt')\n",
"\n",
" qdict = get_qdict(qlist_seg)\n",
" qdict = list(qdict)\n",
" w1_dict = {}\n",
" sim = {}\n",
"\n",
" count_loop = 0\n",
" for w1 in qdict:\n",
" count_loop += 1\n",
" if count_loop % 1000 == 0: print(count_loop)\n",
"\n",
" w2_list = []\n",
" for w2 in qdict:\n",
" if w1 == w2:\n",
" w2_list.append(0)\n",
" continue\n",
"\n",
" try:\n",
" sim_word = model[w1].dot(model[w2].T)\n",
" w2_list.append(sim_word)\n",
" except KeyError:\n",
" # print('err', w1)\n",
" w2_list.append(0)\n",
" continue\n",
"\n",
" sim_list = w2_list\n",
" sorted_id = sorted(range(len(sim_list)),\n",
" key=lambda k: sim_list[k],\n",
" reverse=True)[:10]\n",
" sim[w1] = [qdict[id] for id in sorted_id]\n",
"\n",
" write_json('related_words.josn',sim)\n",
" return sim\n",
"\n",
"\n",
"def get_qdict(qlist_seg):\n",
" qdict = set()\n",
" for sent in qlist_seg:\n",
" for word in sent:\n",
" qdict.add(word)\n",
"\n",
" print(\"句子包含单词数量:\", len(qdict))\n",
" return qdict\n",
"\n",
"\n",
"# related_words = sim_word_top10(qlist_seg) #跑的很慢"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TODO 读取语义相关的单词\n",
"def get_related_words(file):\n",
"# def get_related_words(file):\n",
" \n",
" return related_words\n",
"# related_words = X*X\n",
"# return related_words\n",
"\n",
"related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。"
"# related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。"
]
},
{
@@ -398,6 +965,88 @@
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['woman', '-', 'who', 'old', 'father', 'one', 'boy', 'young', 'girl', 'life']"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"related_words['man']\n",
"# related_words.keys()"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"def get_simInvertedTable(qlist_seg):\n",
" inverted_idx = defaultdict(set)\n",
" simInverted_idx = defaultdict(set)\n",
"\n",
" for index in range(len(qlist_seg)):#统计了每个词出现在某个句子的引索\n",
" for word in qlist_seg[index]:\n",
" inverted_idx[word].add(index)\n",
" \n",
" simInverted_idx = inverted_idx\n",
" for w in list(related_words.keys()):#把相似单词的引索合并在一起\n",
" for x in related_words[w]:\n",
" simInverted_idx[w] |= inverted_idx[x]\n",
" return simInverted_idx\n",
"\n",
"simInvertedTable = get_simInvertedTable(qlist_seg)#获取添加了top10单词的倒排表"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [],
"source": [
"# simInvertedTable = dict(simInvertedTable)\n",
"# for x in simInvertedTable.keys():simInvertedTable[x] = list(simInvertedTable)\n",
" \n",
"# write_json('simInvertedTable.josn',simInvertedTable)"
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {},
"outputs": [
{
"ename": "MemoryError",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mMemoryError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m<ipython-input-82-5b706a86e7d3>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0msimInvertedTable\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mread_json\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'simInvertedTable.josn'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[1;32m<ipython-input-43-113997c42c8e>\u001b[0m in \u001b[0;36mread_json\u001b[1;34m(file)\u001b[0m\n\u001b[0;32m 2\u001b[0m \u001b[1;31m# 读取存储于json文件中的列表\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 3\u001b[0m \u001b[1;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfile\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m'r'\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mf_obj\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 4\u001b[1;33m \u001b[0mjlist\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mjson\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mload\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mf_obj\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 5\u001b[0m \u001b[1;32mreturn\u001b[0m \u001b[0mjlist\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 6\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32mE:\\apppath\\lib\\json\\__init__.py\u001b[0m in \u001b[0;36mload\u001b[1;34m(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[0;32m 294\u001b[0m \u001b[0mcls\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mcls\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mobject_hook\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mobject_hook\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 295\u001b[0m \u001b[0mparse_float\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mparse_float\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mparse_int\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mparse_int\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 296\u001b[1;33m parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n\u001b[0m\u001b[0;32m 297\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 298\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32mE:\\apppath\\lib\\json\\__init__.py\u001b[0m in \u001b[0;36mloads\u001b[1;34m(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[0;32m 346\u001b[0m \u001b[0mparse_int\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mparse_float\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mand\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 347\u001b[0m parse_constant is None and object_pairs_hook is None and not kw):\n\u001b[1;32m--> 348\u001b[1;33m \u001b[1;32mreturn\u001b[0m \u001b[0m_default_decoder\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdecode\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0ms\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 349\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mcls\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 350\u001b[0m \u001b[0mcls\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mJSONDecoder\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32mE:\\apppath\\lib\\json\\decoder.py\u001b[0m in \u001b[0;36mdecode\u001b[1;34m(self, s, _w)\u001b[0m\n\u001b[0;32m 335\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 336\u001b[0m \"\"\"\n\u001b[1;32m--> 337\u001b[1;33m \u001b[0mobj\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mend\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mraw_decode\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0ms\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0midx\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0m_w\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0ms\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m0\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mend\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 338\u001b[0m \u001b[0mend\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0m_w\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0ms\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mend\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mend\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 339\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mend\u001b[0m \u001b[1;33m!=\u001b[0m \u001b[0mlen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0ms\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;32mE:\\apppath\\lib\\json\\decoder.py\u001b[0m in \u001b[0;36mraw_decode\u001b[1;34m(self, s, idx)\u001b[0m\n\u001b[0;32m 351\u001b[0m \"\"\"\n\u001b[0;32m 352\u001b[0m \u001b[1;32mtry\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 353\u001b[1;33m \u001b[0mobj\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mend\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mscan_once\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0ms\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0midx\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 354\u001b[0m \u001b[1;32mexcept\u001b[0m \u001b[0mStopIteration\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0merr\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 355\u001b[0m \u001b[1;32mraise\u001b[0m \u001b[0mJSONDecodeError\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"Expecting value\"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0ms\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0merr\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[1;32mNone\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[1;31mMemoryError\u001b[0m: "
]
}
],
"source": [
"simInvertedTable = read_json('simInvertedTable.josn')"
]
},
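{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `MemoryError` above comes from `json.load` having to materialise the whole serialised table (tens of thousands of keys, each holding a long list of question indices) in memory at once. A simpler workaround, sketched below, is to skip the JSON round trip entirely and rebuild the table from `qlist_seg`, which is cheap compared with parsing such a large JSON file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rebuild the extended inverted table in memory instead of reloading it from JSON\n",
"simInvertedTable = get_simInvertedTable(qlist_seg)"
]
},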
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
......@@ -429,11 +1078,28 @@
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" query = text_process(query)\n",
" v2c_sent = sent_vector(query)\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" candidates = set()\n",
" candidates.add(x) for x in query\n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
" sim = X[list(candidates)] * v2c_sent.T\n",
" norm = np.linalg.norm(sim, axis=1, keepdims=True)\n",
" sim /= norm\n",
"\n",
" p = PriorityQueue()\n",
" for cur in range(sim.shape[0]):\n",
" p.put((sim[cur][0], cur))\n",
" if len(p.queue) > 5:\n",
" p.get()\n",
"\n",
" p_rank = sorted(p.queue, reverse=True, key=lambda x: x[0])\n",
" print([x[0] for x in p_rank])\n",
"\n",
" top_idxs = [x[1] for x in p_rank] # top_idxs存放相似度最高的(存在qlist里的)问题的下表\n",
"\n",
" return [alist[x] for x in top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案\n"
]
},
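{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small self-contained illustration of the priority-queue hint above (the scores below are made up for the example): a size-5 min-heap evicts its smallest element whenever a sixth arrives, so after one pass over n candidate scores exactly the 5 largest remain, in O(n log 5) time instead of sorting all n."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from queue import PriorityQueue\n",
"\n",
"scores = [0.12, 0.87, 0.45, 0.93, 0.30, 0.66, 0.71, 0.05]  # hypothetical similarity scores\n",
"\n",
"pq = PriorityQueue()\n",
"for idx, s in enumerate(scores):\n",
"    pq.put((s, idx))\n",
"    if len(pq.queue) > 5:\n",
"        pq.get()                       # drop the current smallest of the six\n",
"\n",
"print(sorted(pq.queue, reverse=True))  # the five highest (score, index) pairs"
]
},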
{
......@@ -509,9 +1175,47 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 28,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"[nltk_data] Downloading package reuters to\n",
"[nltk_data] C:\\Users\\chencheng\\AppData\\Roaming\\nltk_data...\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# import nltk #下载reuters数据集\n",
"# nltk.download('reuters')"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"54716\n",
"[['ASIAN', 'EXPORTERS', 'FEAR', 'DAMAGE', 'FROM', 'U', '.', 'S', '.-', 'JAPAN', 'RIFT', 'Mounting', 'trade', 'friction', 'between', 'the', 'U', '.', 'S', '.', 'And', 'Japan', 'has', 'raised', 'fears', 'among', 'many', 'of', 'Asia', \"'\", 's', 'exporting', 'nations', 'that', 'the', 'row', 'could', 'inflict', 'far', '-', 'reaching', 'economic', 'damage', ',', 'businessmen', 'and', 'officials', 'said', '.'], ['They', 'told', 'Reuter', 'correspondents', 'in', 'Asian', 'capitals', 'a', 'U', '.', 'S', '.', 'Move', 'against', 'Japan', 'might', 'boost', 'protectionist', 'sentiment', 'in', 'the', 'U', '.', 'S', '.', 'And', 'lead', 'to', 'curbs', 'on', 'American', 'imports', 'of', 'their', 'products', '.'], ['But', 'some', 'exporters', 'said', 'that', 'while', 'the', 'conflict', 'would', 'hurt', 'them', 'in', 'the', 'long', '-', 'run', ',', 'in', 'the', 'short', '-', 'term', 'Tokyo', \"'\", 's', 'loss', 'might', 'be', 'their', 'gain', '.'], ['The', 'U', '.', 'S', '.', 'Has', 'said', 'it', 'will', 'impose', '300', 'mln', 'dlrs', 'of', 'tariffs', 'on', 'imports', 'of', 'Japanese', 'electronics', 'goods', 'on', 'April', '17', ',', 'in', 'retaliation', 'for', 'Japan', \"'\", 's', 'alleged', 'failure', 'to', 'stick', 'to', 'a', 'pact', 'not', 'to', 'sell', 'semiconductors', 'on', 'world', 'markets', 'at', 'below', 'cost', '.'], ['Unofficial', 'Japanese', 'estimates', 'put', 'the', 'impact', 'of', 'the', 'tariffs', 'at', '10', 'billion', 'dlrs', 'and', 'spokesmen', 'for', 'major', 'electronics', 'firms', 'said', 'they', 'would', 'virtually', 'halt', 'exports', 'of', 'products', 'hit', 'by', 'the', 'new', 'taxes', '.'], ['\"', 'We', 'wouldn', \"'\", 't', 'be', 'able', 'to', 'do', 'business', ',\"', 'said', 'a', 'spokesman', 'for', 'leading', 'Japanese', 'electronics', 'firm', 'Matsushita', 'Electric', 'Industrial', 'Co', 'Ltd', '&', 'lt', ';', 'MC', '.', 'T', '>.'], ['\"', 'If', 'the', 'tariffs', 'remain', 'in', 'place', 'for', 'any', 'length', 'of', 'time', 'beyond', 'a', 'few', 'months', 'it', 'will', 'mean', 'the', 'complete', 'erosion', 'of', 'exports', '(', 'of', 'goods', 'subject', 'to', 'tariffs', ')', 'to', 'the', 'U', '.', 'S', '.,\"', 'said', 'Tom', 'Murtha', ',', 'a', 'stock', 'analyst', 'at', 'the', 'Tokyo', 'office', 'of', 'broker', '&', 'lt', ';', 'James', 'Capel', 'and', 'Co', '>.'], ['In', 'Taiwan', ',', 'businessmen', 'and', 'officials', 'are', 'also', 'worried', '.'], ['\"', 'We', 'are', 'aware', 'of', 'the', 'seriousness', 'of', 'the', 'U', '.', 'S', '.'], ['Threat', 'against', 'Japan', 'because', 'it', 'serves', 'as', 'a', 'warning', 'to', 'us', ',\"', 'said', 'a', 'senior', 'Taiwanese', 'trade', 'official', 'who', 'asked', 'not', 'to', 'be', 'named', '.']]\n"
]
}
],
"source": [
"from nltk.corpus import reuters\n",
"\n",
......@@ -525,6 +1229,40 @@
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {},
"outputs": [],
"source": [
"from nltk.lm.preprocessing import padded_everygram_pipeline\n",
"from nltk.lm import MLE\n",
"\n",
"train, vocab = padded_everygram_pipeline(2, corpus)\n",
"lm = MLE(2)\n",
"lm.fit(train, vocab)"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.0"
]
},
"execution_count": 89,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# lm.score(\"officials\",['Industrial'])"
]
},
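{
"cell_type": "markdown",
"metadata": {},
"source": [
"The 0.0 above is what a plain-MLE bigram model assigns to any bigram it never saw in the Reuters corpus. A hedged sketch of one way around those zeros, using nltk's add-one `Laplace` estimator in place of `MLE` (the smoothed model is only for comparison, not part of the required solution):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk.lm import Laplace\n",
"from nltk.lm.preprocessing import padded_everygram_pipeline\n",
"\n",
"# padded_everygram_pipeline returns one-shot generators, so rebuild them for the new model\n",
"train_s, vocab_s = padded_everygram_pipeline(2, corpus)\n",
"lm_smooth = Laplace(2)\n",
"lm_smooth.fit(train_s, vocab_s)\n",
"\n",
"print(lm_smooth.score('officials', ['Industrial']))  # small but strictly positive"
]
},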
{
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -541,7 +1279,7 @@
"# TODO 构建channel probability \n",
"channel = {}\n",
"\n",
"for line in open('spell-errors.txt'):\n",
"# for line in open('spell-errors.txt'):\n",
" # TODO\n",
"\n",
"# TODO\n",
......@@ -550,6 +1288,30 @@
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {},
"outputs": [],
"source": [
"def get_spellProbability():\n",
" spellTable = {}\n",
" with open('spell-errors.txt') as f:\n",
" lines = f.readlines()\n",
"\n",
" for line in lines:\n",
" line = line.replace(' ', '')\n",
" cut = line.split(':')\n",
" err = cut[1].strip().split(',')\n",
" Perr = dict(zip(err, [1 / len(err)] * len(err)))\n",
" spellTable[cut[0]] = Perr\n",
"\n",
" return spellTable\n",
"\n",
"\n",
"spellTable = get_spellProbability()"
]
},
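{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check on the channel model (no particular entry of `spell-errors.txt` is assumed): for any correct word, its listed misspellings share the probability mass equally, so the values should sum to 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pick an arbitrary entry and check that its channel probabilities sum to 1\n",
"some_word = next(iter(spellTable))\n",
"print(some_word, spellTable[some_word])\n",
"print('sum of p(misspelling | correct):', sum(spellTable[some_word].values()))"
]
},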
{
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -563,14 +1325,79 @@
"metadata": {},
"outputs": [],
"source": [
"def generate_candidates(word):\n",
"def generate_candidates(word,corpus):\n",
" # 基于拼写错误的单词,生成跟它的编辑距离为1或者2的单词,并通过词典库的过滤。\n",
" # 只留写法上正确的单词。 \n",
" \n",
" \n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": 130,
"metadata": {},
"outputs": [],
"source": [
"\n",
"def generate_edit_one(str):\n",
" \"\"\"\n",
" 给定一个字符串,生成编辑距离为1的字符串列表。\n",
" \"\"\"\n",
" letters = 'abcdefghijklmnopqrstuvwxyz'\n",
" splits = [(str[:i], str[i:])for i in range(len(str)+1)] #将一个单词分成两段\n",
" inserts = [L + c + R for L, R in splits for c in letters]#插入一个字符\n",
" deletes = [L + R[1:] for L, R in splits if R]#删除一个字符\n",
" replaces = [L + c + R[1:] for L, R in splits if R for c in letters]#替换一个字符\n",
" \n",
" return set(inserts+deletes+replaces)\n",
"\n",
"def generate_edit_two(str):\n",
" \"\"\"\n",
" 给定一个字符串,生成编辑距离不大于2的字符串\n",
" :param str:\n",
" :return:\n",
" \"\"\"\n",
"# return [e2 for e1 in generate_edit_one(str) for e2 in generate_edit_one(e1)]\n",
" return set([e2 for e1 in generate_edit_one(str) for e2 in generate_edit_one(e1)])\n"
]
},
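{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get a feel for the size of the candidate space: for a word of length n there are 26(n+1) insertions, n deletions and 26n replacements, i.e. 53n+26 edit-distance-1 strings before de-duplication (238 for a 4-letter word such as 'word'), and a second round of edits blows this up to tens of thousands of strings, which is why filtering against the vocabulary is essential. The check below only prints the de-duplicated sizes rather than asserting any exact number."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sizes of the de-duplicated candidate sets; the exact values depend on the word\n",
"e1 = generate_edit_one('word')\n",
"e2 = generate_edit_two('word')\n",
"print(len(e1), len(e2))"
]
},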
{
"cell_type": "code",
"execution_count": 132,
"metadata": {},
"outputs": [],
"source": [
"spell_corpus = set(sum(corpus,[])) #嵌套列表展开"
]
},
{
"cell_type": "code",
"execution_count": 134,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"89"
]
},
"execution_count": 134,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def generate_edit_word (str , spell_corpus):\n",
" w = generate_edit_two(str) #给定一个字符串,生成编辑距离不大于2的字符串\n",
" w.discard('')\n",
" w &=spell_corpus\n",
" return list(w)\n",
" \n",
"len(generate_edit_word('word',spell_corpus) )"
]
},
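{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pieces above can be combined in the usual noisy-channel way: for a word that is not in the vocabulary, score every candidate c by the channel probability p(word | c) from `spellTable` times the context probability p(c | previous word) from the bigram model `lm`, and keep the best candidate. The sketch below is only meant to show how the candidate generator, the channel model and the language model fit together; the function name `correct_word` and the tiny floor probabilities are choices made here, not part of the assignment template."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def correct_word(word, prev_word=None):\n",
"    # words already in the vocabulary are assumed to be spelled correctly\n",
"    if word in spell_corpus:\n",
"        return word\n",
"\n",
"    best, best_score = word, -np.inf\n",
"    for c in generate_edit_word(word, spell_corpus):\n",
"        # channel term p(word | c): uniform over the misspellings listed for c,\n",
"        # with a tiny floor when this particular pair was never observed\n",
"        channel_p = spellTable.get(c, {}).get(word, 1e-10)\n",
"        # context term p(c | prev_word) from the bigram model (unigram score without context)\n",
"        lm_p = lm.score(c, [prev_word]) if prev_word else lm.score(c)\n",
"        score = np.log(channel_p) + np.log(lm_p + 1e-10)\n",
"        if score > best_score:\n",
"            best, best_score = c, score\n",
"    return best\n",
"\n",
"# correct_word('worde', 'the')  # should return a vocabulary word close to the typo"
]
},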
{
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -663,7 +1490,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.1"
"version": "3.7.0"
}
},
"nbformat": 4,
......