Commit 331e2959 by 20200519088

Upload New File

parent c441513c
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"## 搭建一个简单的问答系统 (Building a Simple QA System)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"本次项目的目标是搭建一个基于检索式的简易的问答系统,这是一个最经典的方法也是最有效的方法。 \n",
"\n",
"```不要单独创建一个文件,所有的都在这里面编写,不要试图改已经有的函数名字 (但可以根据需求自己定义新的函数)```\n",
"\n",
"```预估完成时间```: 5-10小时"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 检索式的问答系统\n",
"问答系统所需要的数据已经提供,对于每一个问题都可以找得到相应的答案,所以可以理解为每一个样本数据是 ``<问题、答案>``。 那系统的核心是当用户输入一个问题的时候,首先要找到跟这个问题最相近的已经存储在库里的问题,然后直接返回相应的答案即可(但实际上也可以抽取其中的实体或者关键词)。 举一个简单的例子:\n",
"\n",
"假设我们的库里面已有存在以下几个<问题,答案>:\n",
"- <\"贪心学院主要做什么方面的业务?”, “他们主要做人工智能方面的教育”>\n",
"- <“国内有哪些做人工智能教育的公司?”, “贪心学院”>\n",
"- <\"人工智能和机器学习的关系什么?\", \"其实机器学习是人工智能的一个范畴,很多人工智能的应用要基于机器学习的技术\">\n",
"- <\"人工智能最核心的语言是什么?\", ”Python“>\n",
"- .....\n",
"\n",
"假设一个用户往系统中输入了问题 “贪心学院是做什么的?”, 那这时候系统先去匹配最相近的“已经存在库里的”问题。 那在这里很显然是 “贪心学院是做什么的”和“贪心学院主要做什么方面的业务?”是最相近的。 所以当我们定位到这个问题之后,直接返回它的答案 “他们主要做人工智能方面的教育”就可以了。 所以这里的核心问题可以归结为计算两个问句(query)之间的相似度。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 项目中涉及到的任务描述\n",
"问答系统看似简单,但其中涉及到的内容比较多。 在这里先做一个简单的解释,总体来讲,我们即将要搭建的模块包括:\n",
"\n",
"- 文本的读取: 需要从相应的文件里读取```(问题,答案)```\n",
"- 文本预处理: 清洗文本很重要,需要涉及到```停用词过滤```等工作\n",
"- 文本的表示: 如果表示一个句子是非常核心的问题,这里会涉及到```tf-idf```, ```Glove```以及```BERT Embedding```\n",
"- 文本相似度匹配: 在基于检索式系统中一个核心的部分是计算文本之间的```相似度```,从而选择相似度最高的问题然后返回这些问题的答案\n",
"- 倒排表: 为了加速搜索速度,我们需要设计```倒排表```来存储每一个词与出现的文本\n",
"- 词义匹配:直接使用倒排表会忽略到一些意思上相近但不完全一样的单词,我们需要做这部分的处理。我们需要提前构建好```相似的单词```然后搜索阶段使用\n",
"- 拼写纠错:我们不能保证用户输入的准确,所以第一步需要做用户输入检查,如果发现用户拼错了,我们需要及时在后台改正,然后按照修改后的在库里面搜索\n",
"- 文档的排序: 最后返回结果的排序根据文档之间```余弦相似度```有关,同时也跟倒排表中匹配的单词有关\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 项目中需要的数据:\n",
"1. ```dev-v2.0.json```: 这个数据包含了问题和答案的pair, 但是以JSON格式存在,需要编写parser来提取出里面的问题和答案。 \n",
"2. ```glove.6B```: 这个文件需要从网上下载,下载地址为:https://nlp.stanford.edu/projects/glove/, 请使用d=200的词向量\n",
"3. ```spell-errors.txt``` 这个文件主要用来编写拼写纠错模块。 文件中第一列为正确的单词,之后列出来的单词都是常见的错误写法。 但这里需要注意的一点是我们没有给出他们之间的概率,也就是p(错误|正确),所以我们可以认为每一种类型的错误都是```同等概率```\n",
"4. ```vocab.txt``` 这里列了几万个英文常见的单词,可以用这个词库来验证是否有些单词被拼错\n",
"5. ```testdata.txt``` 这里搜集了一些测试数据,可以用来测试自己的spell corrector。这个文件只是用来测试自己的程序。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在本次项目中,你将会用到以下几个工具:\n",
"- ```sklearn```。具体安装请见:http://scikit-learn.org/stable/install.html sklearn包含了各类机器学习算法和数据处理工具,包括本项目需要使用的词袋模型,均可以在sklearn工具包中找得到。 \n",
"- ```jieba```,用来做分词。具体使用方法请见 https://github.com/fxsjy/jieba\n",
"- ```bert embedding```: https://github.com/imgarylai/bert-embedding\n",
"- ```nltk```:https://www.nltk.org/index.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第一部分:对于训练数据的处理:读取文件和预处理"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- ```文本的读取```: 需要从文本中读取数据,此处需要读取的文件是```dev-v2.0.json```,并把读取的文件存入一个列表里(list)\n",
"- ```文本预处理```: 对于问题本身需要做一些停用词过滤等文本方面的处理\n",
"- ```可视化分析```: 对于给定的样本数据,做一些可视化分析来更好地理解数据"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1.1节: 文本的读取\n",
"把给定的文本数据读入到```qlist```和```alist```当中,这两个分别是列表,其中```qlist```是问题的列表,```alist```是对应的答案列表"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"def read_corpus(raw_data):\n",
" \"\"\"\n",
" 读取给定的语料库,并把问题列表和答案列表分别写入到 qlist, alist 里面。 在此过程中,不用对字符换做任何的处理(这部分需要在 Part 2.3里处理)\n",
" qlist = [\"问题1\", “问题2”, “问题3” ....]\n",
" alist = [\"答案1\", \"答案2\", \"答案3\" ....]\n",
" 务必要让每一个问题和答案对应起来(下标位置一致)\n",
" \"\"\"\n",
" # TODO 需要完成的代码部分 ...\n",
" # create empty list for question and answer\n",
" qlist = []\n",
" alist = []\n",
" # identify the raw data length\n",
" raw_data_len = len(raw_data) # total 442 lists\n",
" # for loop based on raw data length to extract question and answer\n",
" # 1st layer - paragraph\n",
" for i in range(0, raw_data_len):\n",
" paragraphs = raw_data[i]['paragraphs']\n",
" paragraphs_len = len(paragraphs)\n",
" # 2nd layer qas\n",
" for j in range(0, paragraphs_len):\n",
" qas = paragraphs[j]['qas']\n",
" qas_len = len(qas)\n",
" # 3rd layer question and answer\n",
" for k in range(0, qas_len):\n",
" question = qas[k]['question']\n",
" answer_list = qas[k]['answers']\n",
" if len(answer_list) != 0: # beware of empty answer\n",
" answer = answer_list[0]['text'] # answer only 1 row\n",
" qlist.append(question)\n",
" alist.append(answer)\n",
" assert len(qlist) == len(alist) # 确保长度一样\n",
" return qlist, alist\n",
" \n",
"data = pd.read_json('train-v2.0.json')\n",
"df = pd.DataFrame(data)\n",
"raw_data = df['data']\n",
"qlist, alist = read_corpus(raw_data)\n",
"# create check point - save to csv file\n",
"df1 = pd.DataFrame(qlist)\n",
"df1['qlist'] = qlist\n",
"df1['alist'] = alist\n",
"df1 = df1[['qlist', 'alist']]\n",
"df1.to_csv(r'E:\\NLP第7期\\作业\\project1\\qlist.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 1.2 理解数据(可视化分析/统计信息)\n",
"对数据的理解是任何AI工作的第一步, 需要对数据有个比较直观的认识。在这里,简单地统计一下:\n",
"\n",
"- 在```qlist```出现的总单词个数\n",
"- 按照词频画一个```histogram``` plot"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"873165\n",
"45219\n"
]
}
],
"source": [
"# TODO: 统计一下在qlist中总共出现了多少个单词? 总共出现了多少个不同的单词(unique word)?\n",
"# 这里需要做简单的分词,对于英文我们根据空格来分词即可,其他过滤暂不考虑(只需分词)\n",
"import re\n",
"def word_statistics_str(sentence_str):\n",
" word_total_list = []\n",
" for str in sentence_str:\n",
" str = re.sub(r'[^\\w\\s]', '', str) #remove punctuation\n",
" words = str.split() #split the sentence by space\n",
" word_total_list += words #add individual word\n",
" return word_total_list\n",
"word_total_list = word_statistics_str(qlist)\n",
"#checking\n",
"print(len(word_total_list)) # check the total count of words in original list\n",
"word_total = set(word_total_list) #check the total count of unique words\n",
"print(len(word_total))\n",
"# print (word_total)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# TODO: 统计一下qlist中出现1次,2次,3次... 出现的单词个数, 然后画一个plot. 这里的x轴是单词出现的次数(1,2,3,..), y轴是单词个数。\n",
"# 从左到右分别是 出现1次的单词数,出现2次的单词数,出现3次的单词数... \n",
"import collections\n",
"# apply collection to check distribution\n",
"counts = collections.Counter(word_total_list)\n",
"# print(len(counts))\n",
"# check top 100 words with highest requency\n",
"top_100 = counts.most_common(5000)\n",
"# print(top_100)\n",
"# check bottom 100 words with highest requency\n",
"bottom_100 = counts.most_common()[-100:]\n",
"# print(bottom_100)\n",
"# create ranking based on top 100, it is impossible to create a ranking based on full dataset due to huge size\n",
"words = []\n",
"counting = []\n",
"for word, count in top_100:\n",
" words.append(word)\n",
" counting.append(count)\n",
"# print(words)\n",
"# print(counting)\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEOCAYAAACTqoDjAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3Xl4lOW9xvHvL3sIIUAIayDsuwgkIFSQRUVccAFBca1a\nEC321Lq22rq0HvW0elqUKm7FWrUiehRUcGFHQARk3xL2PUEgbBJI8pw/EixqgAmZmXdmcn+uiwvn\nnZnMzWvgzvMuz2POOURERH4syusAIiISmlQQIiJSJhWEiIiUSQUhIiJlUkGIiEiZVBAiIlImFYSI\niJRJBSEiImVSQYiISJlUECIiUqYYrwOcCTMbAAxITk4e1rJlS6/jiIiElYULF+52zqWd7nUWznMx\nZWVluQULFngdQ0QkrJjZQudc1ulep0NMIiJSprAsCDMbYGYv5efnex1FRCRihWVBOOcmOueGp6Sk\neB1FRCRihWVBiIhI4IVlQegQk4hI4IVlQegQk4hI4IVlQVTUlj2HmbE2j3C+xFdEJNDC8ka5ivrn\n3I28PGsDHdJTGNmnORe0qUNUlHkdS0QkpITlCKKi5yDuu6g1Tw08i32HjzH8jYVcMmoWE5Zsp6hY\nIwoRkeMq9Z3UhUXFTFy6ndHT1pGTe5CmtZK4o3czruzUgNjosOxOEZHT8vVO6kpdEMcVFzs+XbGT\n56bmsHLHfhpUT+SO3s24OjOdhNhoPyQVEQkdKogz4Jxj2ppcRk3JYfGWfdSpFs/w85oxtGtDqsRV\nytM1IhKBIrogjs/m2rx582HZ2dl+//rOOeas+5bnpmYzb/0eUpPiuLVHE27qnkFyQqzfP09EJJgi\nuiCOC8Zsrgs27uH5aTlMX5NHtYQYfn5uE275WWNqJMUF9HNFRAJFBeFny7bm8/y0bD5dsYukuGhu\n6JbBbT2bUDs5ISifLyLiLyqIAFmz8wCjp+Xw0dLtxEZHMbRrI4af15T61RODmkNE5EypIAJsw+5D\nvDA9h/cXbcMMrs5M545ezWmUWsWTPCIivlJBBMnWvYcZM2M97yzYQlGx44qz63Nnn2Y0r53saS4R\nkZOJ6III9FVMZ2LX/iO8PHM9b361mSOFRVzcvi5DshrSLK0q9asnEq2pPEQkRER0QRwXCiOIH9tz\n6Civzd7A63M2cqCgEIDYaKNhjSpkpFYhIzWJxqlVyKiVROPUJNJrJOqubREJKhWExw4cOcaK7fvZ\n9O0hNn57uOT33SW/Hzpa9P3roqOMBtUTyUitQuPUJDJSq9CiTjI9m9fSBIIiEhC+FoRuDw6Q5IRY\nujVNpVvT1B9sd86x++DRHxZH6e8fLN7GgSMlo46hXRvx31e1x0wlISLeUEEEmZmRlhxPWnI8WY1r\n/uA55xz7Dh9jzMz1vDhjHfExUTwyoK1KQkQ8oYIIIWZGjaQ4HujfimNFxbw6ewPxsVE82L+1SkJE\ngi4sC+KEq5i8jhIQZsbDl7ahoLCIMTPWkxATzd0XtvQ6lohUMmF5+UxlWJPazHj88vYMyUrnb1Oy\neWH6Oq8jiUglE5YjiMoiKsp4cmAHCgqLeXryauJjori1RxOvY4lIJaGCCHHRUcYzg8+m4Fgxj3+0\nkvjYKK4/J8PrWCJSCYTlIabKJiY6ilFDO9G3dW0e+r/ljF+41etIIlIJqCDCRFxMFH+/vjM9mtfi\n/vFLmLBku9eRRCTCqSDCSEJsNC/flEVW45rc/c5iJi/f6XUkEYlgKogwkxgXzWs/70KH9BTuensR\nb361iWNFxV7HEpEIpIIIQ1XjYxh7S1c6NazBQ/+3nL7PTGfc11tUFCLiVyqIMJWSGMs7t3fjtZ9n\nUT0xjvvfW8r5z8zg3QVbKFRRiIgfhOVsrqG4HoSXnHNMWZXLX6esZfm2/WSkVuGuvi24smN9YjSV\nuIj8iKb7roScc3yxKpe/frGWFdv30zi1Ctd0aUSf1mm0qpOs+ZxEBFBBVGrOOT5fuYvR09exZMs+\nAOqlJNC7VRq9Wtbm3OapJCfEepxSRLyighAAduYfYcbaXKavyWN29m4OFBQSE2VkZtSgS+OaZGbU\noFOj6lSvEud1VBEJEhWE/MSxomIWbtrL9DV5fJmzm5U79lNUXPL/v3ntqnRuVJ2z0qvTpm4yLesm\nU02jDJGIpBXl5Cdio6N+sMrd4aOFLNmSz6LNe1m4aS+frdzFuAX/mcajQfVE2tWvxv39W9G8drJX\nsUXEIxpByPecc+zIP8LqnftZteMAq3ceYFZ2HlFm/PPWrrRvELnTq4tUJjrEJH6xYfchbnjlK/Yf\nOcY/ft7lJ8ukikj48bUgdJG8nFKTWkmMG9GdWlXjufHV+czO3u11JBEJEhWEnFaD6omMu707GalV\nuHXs10xfk+t1JBEJAhWE+CQtOZ5/D+9Gs9pV+dXb37Blz2GvI4lIgKkgxGfVq8Qx5oZMHHDnm4so\nKCzyOpKIBJAKQsqlUWoVnhl8Nsu25fPHj1Z6HUdEAiikCsLMksxsgZld5nUWObl+7eoy/Lym/Gve\nZj5cvM3rOCISIAEtCDN7zcxyzWz5j7b3N7M1ZpZjZg+e8NQDwLhAZhL/uO+iVnRpXIPfvr+MTd8e\n8jqOiARAoEcQY4H+J24ws2hgNHAx0BYYamZtzexCYCWgS2TCQGx0FKOGdqLYOZ6bmuN1HBEJgIAW\nhHNuJrDnR5u7AjnOufXOuaPAv4ErgN5AN+A6YJiZhdThL/mpeimJXNulER98s01XNYlEIC/+EW4A\nbDnh8VaggXPuIefcr4G3gJedc2Uui2Zmw0vPUyzIy8sLQlw5ldt7NcUMxsxc53UUEfGzkPsp3Tk3\n1jn30Smef8k5l+Wcy0pLSwtmNClDvZRErs5MZ9yCreTuP+J1HBHxIy8KYhvQ8ITH6aXbfGZmA8zs\npfz8fL8GkzMzolczCouKeXnWeq+jiIgfeVEQXwMtzKyJmcUB1wITyvMFnHMTnXPDU1I0u2goyEhN\n4vKz6/OveZvZc+io13FExE8Cuh6Emb1NycnnWma2FXjEOfeqmY0EPgWigdeccysCmUMC75d9mvPB\n4u38/B/zqZeSAEDHhjW4/bymREVpLWyRcBSW032b2QBgQPPmzYdlZ2d7HUdKPTlpFTPWlFw4cLSo\nmPV5h7j0rHo8M+RsEmKjPU4nIsdpPQjxlHOOV2Zt4IlPVpGVUYPHrmhHUlwMSfExpCXHex1PpFLT\nkqPiKTNj2HlNqV89kbvHLebSUbO/f+7Vm7M4v00dD9OJiC/CsiBOOMTkdRQ5jUs71KN1vWRW7djP\nsaJinvxkNW9+tVkFIRIGwrIgnHMTgYlZWVnDvM4ip9csrSrN0qoCsHbXQcbMWEfu/iPUrpbgcTIR\nOZWQu1FOItvgzHSKHbz/jWaBFQl1KggJqqZpVcnMqMG7C7YQzhdIiFQGYVkQupM6vA3OTGdd3iG+\n2bLP6ygicgq6zFWC7sCRY3R9Ygqt6iZ
zXotapCXH07JOMq3rVSMlMdbreCIRT5e5SshKTojlFz2b\n8K95m3hu6z6O/4wSHxPFO7d3p2PD6t4GFBFAIwjxWFGxI/fAEVbvOMBvxi0mq3FNXr7ptD/YiEgF\n+DqC0DkI8VR0lFEvJZE+rWtzQ7cMvli1iw27tYSpSCgIy4LQbK6R6cbuGcRGRfHqbE0bLhIKdA5C\nQkbt5ASu7FSf8Qu3Ui8lkeSEGM5pkkrLOlUx04ywIsGmgpCQMvy8Zny8dAd//nTN99vSkuNpWCOR\ntvWrcWXHBnRqVINoTSEuEnA6SS0hxzlHQWEx3x46ysy1eSzYuJft+75j8ZZ9fHesiMTYaFrUqUr9\nlESGndeEzIyaXkcWCSua7lsizsGCQqas2sXiLfvIyT3Iiu37iYuOYso9vUiK12BYxFcRXRBaMEgA\nFm7ay6AX5vCLHk347SVtdNhJxEcRXRDHaQQh9727hHcXbiUpLprqVeK4sG0dHhnQVie1RU5Bd1JL\npfCnq9rTvVkqS7fms373IcbO2UhW4xpc1qG+19FEwp4KQsJafEw0AzunM7BzOoVFxQx8YQ6//2A5\n9VISycyo4XU8kbAWljfKiZQlJjqK/72mI1UTYhgyZi5PTlrFoYJCr2OJhC0VhESUZmlV+eiunlzV\nqQFjZqxn0AtzNHWHyBnSSWqJWDPW5jHyzUUcKCikWVoSgzLTubO31jEX0WR9Uun1apnGpF/35OFL\n25CaFM//TF7D7OzdXscSCRsaQUilUFBYRN+/zCA5IYY3bjuHtOR4ryOJeCaiRxAi5RUfE80jA9qy\nPu8Ql4yaxUGdvBY5rdMWhJmlBiOISKD1a1eXMTdlknegQIeaRHzgywhinpm9a2aXmG5PlTDXo3kt\nkuNjmLY61+soIiHPl4JoCbwE3Ahkm9l/m1nLwMYSCYzY6Ch6tqzF56t28d7CrYTzOTiRQDttQbgS\nnzvnhgLDgJuB+WY2w8y6BzyhiJ9dnZlOwbEi7nl3Ca/O3uB1HJGQ5dM5CDP7LzNbANwL3AXUAu4B\n3gpwPhG/69u6DssevYh+bevwp49Xcc2YuXyybIfXsURCji+HmOYC1YArnXOXOufed84VOucWAC8G\nNp5IYERFGaOGduKB/q3Ztu877n5nMfsOH/U6lkhI8aUgWjnn/uic2/rjJ5xzTwcg02npRjnxh4TY\naO7o3YyXb8qioLCY8Qt/8i0uUqn5UhCfmVn14w/MrIaZfRrATKflnJvonBuekpLiZQyJEG3qVaNT\no+q8v2ib11FEQoovBZHmnNt3/IFzbi9QO3CRRIKvX9u6rNyxn535R7yOIhIyfCmIIjNrdPyBmWUA\nujZQIkrf1iU/83yxapfHSURChy8LBj0EzDazGYABPYHhAU0lEmQt61Sldd1knpq0mo4Nq9O+gQ5f\nivhyH8RkoDPwDvBvINM55+k5CBF/MzP+cUsXkuKj+c24xRQUFnkdScRzvk7WFw/sAfYDbc3svMBF\nEvFGvZREnhrUgbW7DvLEx6tUElLpnfYQk5k9DVwDrACKSzc7YGYAc4l4ok+r2gzJSuefczfxZc5u\n3rvjZ1SvEud1LBFP+HIO4kpK7oUoCHQYkVDw31edRfdmqTwwfhl/+HAFo4Z28jqSiCd8OcS0HogN\ndBCRUBETHcVVndIZdl4TJizZzifLdnDkmA43SeXjywjiMLDYzKYA348inHO/ClgqkRAw/LxmfL5y\nF3e+uYiYKCMjtQqdGtXgpu4ZtKlXjdhorbclkc2XgphQ+kukUklJjGXiXT2YtjqPZdv2kZN7kEnL\ndjB+4VbSkuMZ2LkBvz6/JYlx0V5HFQkIn9akNrNEoJFzbk3gI/lOa1JLsO07fJTJy3fy/jfbmL9h\nDw1rJjL8vGYMyUonPkZFIeHBb2tSm9kAYDEwufRxRzPz+4jCzNqY2YtmNt7M7vD31xfxh+pV4ri2\nayPG3d6dF2/IpHpiHL//YDlPTVrtdTQRv/PlIOqjQFdgH4BzbjHQ1JcvbmavmVmumS3/0fb+ZrbG\nzHLM7MHSr7vKOTcCGAKcW44/g4gn+revy4SR59K/XV3e+XoLn63Y6XUkEb/ypSCOOed+PK92cZmv\n/KmxQP8TN5hZNDAauBhoCww1s7alz10OfAx84uPXF/GUmfHQpW3ISE1i+BsLefazNVrGVCKGLwWx\nwsyuA6LNrIWZPQfM8eWLO+dmUnIH9om6AjnOufXOuaOUTN9xRenrJzjnLgau9/lPIOKxhjWr8OEv\nz2VwZjqjpuZw02vzyf/umNexRCrMl4K4C2hHySWub1My3cavK/CZDYAtJzzeCjQws95mNsrMxnCK\nEYSZDTezBWa2IC8vrwIxRPwnLiaKJweexb39WjIrezfXjJlLcbFGEhLeTnuZq3PuMCUzuj4UyCDO\nuenAdB9e9xLwEpRcxRTITCLlERMdxci+LUhJjOX3H67gqr9/SfsGKVzQpg59WmsJFQk/vszFNI0y\n1n9wzvU9w8/cBjQ84XF66TaRiHDdORnsOXSMaWtymbBkO29+tZlLO9Tjrr7NaV23mtfxRHx22vsg\nzCzzhIcJwCCg0Dl3v08fYNYY+Mg51770cQywFjifkmL4GrjOObfC59All94OaN68+bDs7Gxf3yYS\ndEcLi3nm8zW8NW8zxc4xsm8LRvRqipl5HU0qMV/vg/DpRrkyvvh851xXH173NtAbqAXsAh5xzr1q\nZpcAfwWigdecc0+UOwS6UU7Cx9a9h/nDhyuYujqX1nWTuTozndt6NFFRiCf8VhBmVvOEh1FAJjDK\nOdeqYhHPnEYQEo6cc7y7YCsvzljH+t2H6Ne2Dk8P6kCNJE0nLsHlz4LYQMk5CAMKgQ3A48652f4I\nWhEaQUg4cs7x7OdrGT0thx4t0nj9li4aSUhQ+VoQvlzF1MQ/kUQESm6uu6dfK1KT4nh04kruevsb\nnrjqLFISNau+hBZfrmIaeKrnnXPv+y+OSOVxQ7cMdu4v4NXZ61mXd4hxt3cjOUElIaHDlxvlbgNe\npeTu5uuBV4BbgQHAZYGLdnJmNsDMXsrP//EMICLhIyY6igcvbs2YGzNZtWM/j05YqZvrJKT4UhCx\nQFvn3CDn3CBK7qqOdc7d4py7NbDxyuacm+icG56SkuLFx4v4Vd/Wdbj+nEa8t2grrf8wmVFTdOGF\nhAZfCqKhc27HCY93AY0ClEekUvrjFe0ZNbQTbetV469frGX6mlyvI4n4dBXT80ALSuZhAriGksn2\n7gpwtlNl0mWuEpH2HT7KkDFzyc49SK+WaZzfpg43dsvwOpZEGL/eKGdmVwHnlT6c6Zz7vwrm8wtd\n5iqR6FBBIc98tpaxczZQ7OAPl7Xl1h66mFD8x28rypVaBHzsnLsb+NTMkiuUTkROKik+hj8MaMtX\nv7uAtOR4npq0muXbdEGGBJ8vS44OA8YDY0o3NQA+CGQoEYG05HhGX9cZDC57bjYj3ljI9n3feR1L
\nKhFfRhC/pGQJ0P0AzrlsQHMXiwRB1yY1mfNgX4Z2bciU1bvo978z+TJnN0W6HFaCwJeCKChd+Q34\nfjZWT787dR+EVCa1qsbz5MAOfPjLHiTERnP9K19x/jPT+WTZjtO/WaQCfCmIGWb2OyDRzC4E3gUm\nBjbWqek+CKmM2tavxtR7e/HskLM5WljMyLcWMTt7t9exJIL5UhAPAnnAMuB2SpYDfTiQoUSkbNUS\nYhnYOZ0Jd/WgYc0q3PyP+Xy6YqfXsSRCnbIgzCwaeMM597JzbrBz7urS/9YBUBEP1aoazxu3nkNK\nYiy3v7GQRz5czrcHC7yOJRHmlAXhnCsCMsxME9aLhJhGqVWYdm9vbu6ewetzN5H5py94Yfo6r2NJ\nBPHlTup/Am2ACcCh49udc88GNtopM+lOapETLN+WzyMTVrBw0176t6vL34Z2JD4m2utYEqIqfKOc\nmb1R+p+XAx+Vvjb5hF+e0UlqkR9q3yCFN39xDted04jJK3bS58/TGT0th9z9R7yOJmHspCMIM1sJ\nXABMpmRd6R9wzu0JaDIfaKoNkZ96Y+5G/jVvM2t2HSDKoFvTVIb1bEqf1rp9SUpUeC4mM/sVcAfQ\nBNh+4lOAc8419UfQilBBiJTNOUd27kHGL9zK2C83crSomH5t6/DEVWeRlhzvdTzxmD/XpH7BOXeH\n35L5kQpC5PQKCot48pPVvDFvE0XFjgva1ObCtnXo376eljmtpPw6m2uoUkGI+G7m2jzGL9zKtNW5\nHCgoJDE2mpu6Z3BH72ZUr6ILFSsTFYSIlMk5xzdb9jF6ag5TVucSFx3F4Kx0RvRqRsOaVbyOJ0EQ\n0QWhy1xF/GPp1n28PX8L4xduwTkY2LkBvzq/Bek1VBSRLKIL4jiNIET8Y0f+d4yZsZ6xczYSHWVc\n3L4uv7mwJU3TqnodTQJABSEi5bZx9yH+8eUGXp+7CYCsjBr89pLWZGbU9DiZ+JO/V5QTkUqgca0k\nHruiPTPu6829/VqyI/8Ig1+cy5gZmsKjMlJBiMhPZKQmMbJvCyb/uicXtKnDk5NW88D4pczf4Pn9\nsRJEKggROankhFhGDe3ETd0z+HDJNoaMmcugF+YwJ2c3x4qKvY4nAaZzECLik8NHC3nn6y38bUo2\n+w4fIyE2ijt6Nee6cxrp7uwwo5PUIhIQh48W8umKnby/aBuzsncTHWUM6FCPBy9uQ92UBK/jiQ9U\nECIScMu35fP81Bwmr9hJTJRxQ7cMbu/VlHopiV5Hk1NQQYhI0CzavJe/fpHNzLV5AFzVqQG/ubCl\n7swOURFdELqTWiQ0Ld+Wz4sz1vHR0h3ExUQxvGdThvdqSrUETQoYSiK6II7TCEIkNC3Zso/HJq5g\n0eZ9JMfHcNf5zRnUOZ3UqjqZHQpUECLiuU9X7OSB95ay7/AxzGBo10Zc26UhHdKrex2tUlNBiEhI\nKC52zFibx5iZ65i3vuRGu9Z1k3lkQDu6N0v1OF3lpIIQkZCzYfch3p6/mVdmrafYQVJcNBefVY+b\nuzfmrHStMR8sKggRCVl5BwoYv3Arn6/cyaLN+wBoU68at/VowsBODYiKMo8TRjYVhIiEhU3fHuKt\n+Zt566vNHDhSSNt61XjsinZ0aawZZANFBSEiYeVYUTFjZqzjmc/X4hxc0bE+d1/Qksa1kryOFnFU\nECISlnL3H+G37y9jyupcAC5qV4ffXdKGjFQVhb+oIEQkrK3Yns/fp6/j46U7AOjVMo37+7eiXX2d\nzK4oFYSIRIRVO/bz/LSc74uiS+MaPDuko6bxqACtKCciEaFNvWqMvq4zM+7rzZUd6/P1xr1c8OyM\n7wtDAiekCsLMrjSzl83sHTPr53UeEQkdGalJ/PXaTrw7ojsAv3xrEde/Mo9vNu/1OFnkCnhBmNlr\nZpZrZst/tL2/ma0xsxwzexDAOfeBc24YMAK4JtDZRCT8dGlck4W/v5AL2tThy5xvuervc3jkw+Uc\nOVbkdbSIE4wRxFig/4kbzCwaGA1cDLQFhppZ2xNe8nDp8yIiP1E1PoZXbs7ig1+eC8DrczfR8fHP\n2L7vO4+TRZaAF4Rzbibw45XOuwI5zrn1zrmjwL+BK6zE08Ak59yiQGcTkfDWsWF1Vv+xP4Mz0zly\nrJifPTWVsV9u8DpWxPDqHEQDYMsJj7eWbrsLuAC42sxGlPVGMxtuZgvMbEFeXl7gk4pISEuIjebP\ng89m/IjuVK8Sy6MTV3LpqFlk7zrgdbSwF1InqZ1zo5xzmc65Ec65F0/ympecc1nOuay0tLRgRxSR\nEJXVuCbzf3cB13ZpyIrt+7nwf2dy+fOz2bj7kNfRwpZXBbENaHjC4/TSbSIiZywuJoqnBnXgk1/1\npGmtJJZuzaf3X6bz4HtLKSjUSezy8qogvgZamFkTM4sDrgUm+PpmMxtgZi/l5+cHLKCIhK+29asx\n9d7ePDvkbAD+/fUWWj08+fs1s8U3wbjM9W1gLtDKzLaa2W3OuUJgJPApsAoY55xb4evXdM5NdM4N\nT0nRLfcicnIDO6eT/cTF3NgtA4CbXpvPb95ZrNGEjzTVhohUCnPW7ea6l7/6/vHEkT0q7SJFET3V\nhg4xiUh5/axZLVY8dhH929UFYMDzs3n4g2WE8w/JgaYRhIhUOtNW53LL2K+/fzzlnl40S6vqYaLg\niugRhIhIRfRpXZuVj19Ezxa1ADj/mRk8+9kaiovD9wfmQAjLgtAhJhGpqCpxMbxx2zk8M7jkSqdR\nU3No+rtP+NsX2TqJXUqHmESk0tt76CgPf7j8+ynEzWDx7/uRUiXW42SBoUNMIiI+qpEUx+jrOrP0\n0X6cnZ6Cc3D245+xoZLfhR2WBaFDTCISCNUSYvlwZA+uySqZ6KHPX6bz+MSVlfZKJx1iEhEpw7sL\ntnDf+KUAJCfEMPWe3qQlx3ucyj90iElEpAIGZzVk6aP9aFUnmQNHCunyxBdMXb3L61hBpYIQETmJ\nagmxfHr3edxybmMAbh27gIc/WEZhUbG3wYJEBSEichqPDGjHP27pAsC/5m2m+UOTyD98zONUgReW\nBaGT1CISbH1a1Wb1H/vTpXENoOQqp03fRvZVTmFZEJrNVUS8kBAbzbsjfsbAzg0A6PXn6bwya73H\nqQInLAtCRMRLzw7pyNODzgLgTx+v4tynpvLtwQKPU/mfCkJE5Axc06UR0+/tTcOaiWzb9x2Zf/qC\nGRG2IJEKQkTkDDWulcTM+/p8f5XTza/N54Xp67wN5UdhWRA6SS0iocLMeGRAO8bcmAnA05NXM+iF\nORyLgEthw7IgdJJaRELNRe3qMvO+PgAs3LSXnz01NewvhQ3LghARCUWNUquw6vH+1K2WQN6BAs5+\n/DNWbt/vdawzpoIQEfGjxLhovnywL5d1qAfAJaNmsWTLPo9TnRkVhIiIn0VHGc9f15nHr2gHwBWj\nv+Tt+ZvDbooOFYSISIDc1L0xf7yyPQC/fX8ZV/19Dlv
2HPY4le9UECIiAXRjtwwm/VdPYqKMZdvy\n6fk/01gcJoecwrIgdJmriISTNvWqkf3ExdzZuxkAV47+kpzcgx6nOr2wLAhd5ioi4cbMuL9/a0b2\naQ7ABc/OYNnW0P4hNywLQkQkXN3TryXXn9MIgAHPz2bc11soKCzyOFXZVBAiIkFkZvzpyvY80L81\nAPe/t5RfvL6AI8dCryRUECIiQWZm3NG7GZN/3ROAWdm7GfbPBWz+NrSucFJBiIh4pHXdaiz6/YVA\nSUnc+dZCFm7a43Gq/1BBiIh4qGZSHEv+0I8rOtZn+bb9DH5xLl9vDI2SUEGIiHgspUosf776bJ64\nqj3FDga/OJev1n/rdSwVhIhIKIiLiWJol0Y8N7QTANe8NI+pq3d5mkkFISISIqKijMs61OPFG0rW\nlrjt9QU8+N5S7/J49skVoDupRSRSmRn929fl6UFn0aJ2VSYs2c4945ZQXOyCniUsC0J3UotIpLum\nSyPuu6g1dVMSeG/RVl6etT7o90qEZUGIiFQGF7atw1MDOxAdZTw5aTUfLd0R1JJQQYiIhLCuTWoy\n+4GSpUzvfXcJff8yHeeCc7hJBSEiEuLqpSTy0o2ZXNSuDtvzj3D3O4uDchlsTMA/QUREKqxfu7qk\nVo0ne9cCBt1/AAAElklEQVRBFm7ey4Vt6wb8M1UQIiJhIjOjBlPv7R20z9MhJhERKZMKQkREyqSC\nEBGRMqkgRESkTCoIEREpkwpCRETKpIIQEZEyqSBERKRMFqw5PQLBzPKATaUPU4AT5/8+3eNawO6A\nBvzpZwbivad73cmeL892r/dlKO/Hkz0XivvxZLn8+T7tR/+9L5B/tzOcc2mnTeCci4hfwEvlfLwg\n2JkC8d7Tve5kz5dnu9f7MpT3o6/7LBT2Y0X2pfZjcPdjRfZlebef6lckHWKaWM7HwVCRz/T1vad7\n3cmeL892r/dlKO/Hkz0XivuxIp+p/eifzyzP+4Lxd/uUwvoQU0WY2QLnXJbXOSKB9qV/aD/6h/aj\n/0TSCKK8XvI6QATRvvQP7Uf/0H70k0o7ghARkVOrzCMIERE5BRWEiIiUSQUhIiJlUkGUMrMkM3vd\nzF42s+u9zhOuzKypmb1qZuO9zhLuzOzK0u/Hd8ysn9d5wpWZtTGzF81svJnd4XWecBLRBWFmr5lZ\nrpkt/9H2/ma2xsxyzOzB0s0DgfHOuWHA5UEPG8LKsx+dc+udc7d5kzT0lXNfflD6/TgCuMaLvKGq\nnPtxlXNuBDAEONeLvOEqogsCGAv0P3GDmUUDo4GLgbbAUDNrC6QDW0pfVhTEjOFgLL7vRzm1sZR/\nXz5c+rz8x1jKsR/N7HLgY+CT4MYMbxFdEM65mcCeH23uCuSU/qR7FPg3cAWwlZKSgAjfL+VVzv0o\np1CefWklngYmOecWBTtrKCvv96RzboJz7mJAh4/LoTL+Q9iA/4wUoKQYGgDvA4PM7AW8uXU/3JS5\nH80s1cxeBDqZ2W+9iRZ2TvY9eRdwAXC1mY3wIliYOdn3ZG8zG2VmY9AIolxivA4QKpxzh4BbvM4R\n7pxz31JyzFwqyDk3ChjldY5w55ybDkz3OEZYqowjiG1AwxMep5duk/LRfvQf7Uv/0H70s8pYEF8D\nLcysiZnFAdcCEzzOFI60H/1H+9I/tB/9LKILwszeBuYCrcxsq5nd5pwrBEYCnwKrgHHOuRVe5gx1\n2o/+o33pH9qPwaHJ+kREpEwRPYIQEZEzp4IQEZEyqSBERKRMKggRESmTCkJERMqkghARkTKpIESC\nyMweNbN7vc4h4gsVhMgZKp1tVX+HJGLpm1ukHMyscemCNP8ElgOvmtkCM1thZo+d8LqNZvaYmS0y\ns2Vm1rqMrzXMzCaZWWIw/wwivtJsriLl1wK42Tk3z8xqOuf2lC5WM8XMOjjnlpa+brdzrrOZ3Qnc\nC/zi+Bcws5HAhcCVzrmCoP8JRHygEYRI+W1yzs0r/e8hZrYI+AZoR8lKZse9X/r7QqDxCdtvomTV\ns6tVDhLKVBAi5XcIwMyaUDIyON8514GSJS0TTnjd8X/8i/jhaH0ZJYWRjkgIU0GInLlqlJRFvpnV\noWRU4ItvgNuBCWZWP1DhRCpKBSFyhpxzSyj5x3418BbwZTneO5uS0cfHZlYrMAlFKkbTfYuISJk0\nghARkTKpIEREpEwqCBERKZMKQkREyqSCEBGRMqkgRESkTCoIEREpkwpCRETK9P9ZZ3xNmOzXCQAA\nAABJRU5ErkJggg==\n",
"text/plain": [
"<matplotlib.figure.Figure at 0x143b48d0>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# TODO: 从上面的图中能观察到什么样的现象? 这样的一个图的形状跟一个非常著名的函数形状很类似,能所出此定理吗? \n",
"# hint: [XXX]'s law\n",
"from matplotlib import pylab\n",
"rank = [i for i in range(1, len(counting) + 1)]\n",
"pylab.loglog(rank, counting, label='Zipf law')\n",
"pylab.xlabel('rank')\n",
"pylab.ylabel('frequency')\n",
"pylab.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"#### 1.3 文本预处理\n",
"此部分需要做文本方面的处理。 以下是可以用到的一些方法:\n",
"\n",
"- 1. 停用词过滤 (去网上搜一下 \"english stop words list\",会出现很多包含停用词库的网页,或者直接使用NLTK自带的) \n",
"- 2. 转换成lower_case: 这是一个基本的操作 \n",
"- 3. 去掉一些无用的符号: 比如连续的感叹号!!!, 或者一些奇怪的单词。\n",
"- 4. 去掉出现频率很低的词:比如出现次数少于10,20.... (想一下如何选择阈值)\n",
"- 5. 对于数字的处理: 分词完只有有些单词可能就是数字比如44,415,把所有这些数字都看成是一个单词,这个新的单词我们可以定义为 \"#number\"\n",
"- 6. lemmazation: 在这里不要使用stemming, 因为stemming的结果有可能不是valid word。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# TODO: 需要做文本方面的处理。 从上述几个常用的方法中选择合适的方法给qlist做预处理(不一定要按照上面的顺序,不一定要全部使用)\n",
"def clean_sentence(sentence_string):\n",
" # replace beyonce or else error at word2vec part\n",
" try:\n",
" sentence_string = re.sub('beyoncé|beyoncés', 'beyonce', sentence_string)\n",
" except:\n",
" pass\n",
" # split sentence from string to list\n",
" sentence_list = sentence_string.split(' ')\n",
" # remove stop word in sentence list\n",
" sentence_list = [word for word in sentence_list if word not in stop_words]\n",
" # lower case the words\n",
" sentence_list = [word.lower() for word in sentence_list]\n",
" # remove punctuation\n",
" sentence_list = [re.sub(r'[^\\w\\s]', '', word) for word in sentence_list]\n",
" # convert integer to #number, put before low frequency or else will be deleted due to low frequency\n",
" sentence_list = identify_number_function(sentence_list)\n",
" # lemmatization words using NLTK\n",
" sentence_list = [lemmatizer.lemmatize(word) for word in sentence_list]\n",
" return sentence_list\n",
"\n",
"def remove_low_freq_word(sentence_list, word_total_list_threshold):\n",
" # remove low frequency word by threshold\n",
" sentence_list = [word for word in sentence_list if word in word_total_list_threshold]\n",
" sentence_list = [word for word in sentence_list if word]\n",
" string_output = list_to_string(sentence_list)\n",
" # print(sentence_list)\n",
" return string_output\n",
"\n",
"# convert list to string\n",
"def list_to_string(input_list):\n",
" string = ' '.join([k for k in input_list])\n",
" return string\n",
"\n",
"# convert list of words to a long string of words to find word frequency and set threshold at later part\n",
"def word_statistics_list(sentence_list):\n",
" word_total_list = []\n",
" for i in sentence_list:\n",
" word_total_list += i # add individual word\n",
" return word_total_list\n",
"\n",
"# import english words to filter out non english words\n",
"import nltk\n",
"nltk_words = set(nltk.corpus.words.words())\n",
"# import stop words\n",
"from nltk.corpus import stopwords\n",
"stop_words = stopwords.words('english')\n",
"\n",
"# setup function to convert integer to #number\n",
"def identify_number_function(sentence):\n",
" output = []\n",
" for i in sentence:\n",
" if i.isdigit():\n",
" output.append('#number')\n",
" else:\n",
" output.append(i)\n",
" return output\n",
"\n",
"# import lemmatizer\n",
"from nltk.stem import WordNetLemmatizer\n",
"lemmatizer = WordNetLemmatizer()\n",
"# qlist = # 更新后的问题列表\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"86821\n",
"86821\n",
"86821\n",
"Empty DataFrame\n",
"Columns: [qlist, alist]\n",
"Index: []\n",
" qlist alist\n",
"0 when beyonce start becoming popular in the late 1990s\n",
"(86821, 2)\n"
]
}
],
"source": [
"# 更新问题列表\n",
"def clean(qlist):\n",
" # 1 for loop to clean sentence\n",
" qlist_1 = []\n",
" counter = 0\n",
" for i in qlist:\n",
" output = clean_sentence(i)\n",
" qlist_1.append(output)\n",
" counter += 1\n",
" # print(counter)\n",
" print(len(qlist_1))\n",
" #######################################\n",
" # 2 get high frequency word list, set threshold for low frequency word\n",
" word_total_list_1 = word_statistics_list(qlist_1)\n",
" # print(len(word_total_list_1))\n",
" counts = collections.Counter(word_total_list_1)\n",
" # print(len(counts))\n",
" threshold = 0\n",
" word_total_list_1_new = [i for i in word_total_list_1 if counts[i] > threshold]\n",
" word_total_list_threshold = list(set(word_total_list_1_new))\n",
" word_total_list_threshold.append('#number')\n",
" # print(word_total_list_threshold)\n",
" ########################################\n",
" # 3 remove low frequency word and lemmatization\n",
" qlist_2 = []\n",
" counter = 0\n",
" for i in qlist_1:\n",
" output = remove_low_freq_word(i, word_total_list_threshold)\n",
" qlist_2.append(output)\n",
" counter += 1\n",
" # print(counter)\n",
" return qlist_2\n",
"\n",
"# read from check point\n",
"# df1 = pd.read_csv(r'E:\\GreedyAI\\Week 4\\qlist.csv')\n",
"qlist = df1['qlist'].tolist()\n",
"qlist_1 = clean(qlist)\n",
"# 更新后的问题列表 check and make sure qlist_2 and alist same length\n",
"print(len(qlist_1))\n",
"print(len(alist))\n",
"# create check point - save to csv file\n",
"df2 = pd.DataFrame(qlist_1)\n",
"df2['qlist'] = qlist_1\n",
"df2['alist'] = alist\n",
"df2 = df2[['qlist', 'alist'\n",
" ]]\n",
"df2.to_csv(r'E:\\NLP第7期\\作业\\project1\\qlist2.csv')\n",
"print(df2.head(0))\n",
"print(df2.head(1))\n",
"print(df2.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第二部分: 文本的表示\n",
"当我们做完必要的文本处理之后就需要想办法表示文本了,这里有几种方式\n",
"\n",
"- 1. 使用```tf-idf vector```\n",
"- 2. 使用embedding技术如```word2vec```, ```bert embedding```等\n",
"\n",
"下面我们分别提取这三个特征来做对比。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.1 使用tf-idf表示向量\n",
"把```qlist```中的每一个问题的字符串转换成```tf-idf```向量, 转换之后的结果存储在```X```矩阵里。 ``X``的大小是: ``N* D``的矩阵。 这里``N``是问题的个数(样本个数),\n",
"``D``是词典库的大小"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"ename": "OSError",
"evalue": "Initializing from file failed",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mOSError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-13-0f847ab8ad53>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mpandas\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mpd\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfeature_extraction\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtext\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mTfidfVectorizer\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0mdf_1\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mpd\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mread_csv\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34mr'E:\\NLP第7期\\作业\\project1\\qlist2.csv'\u001b[0m\u001b[1;33m,\u001b[0m\u001b[0mencoding\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m\"gbk\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 5\u001b[0m \u001b[0mqlist_2\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mdf_1\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'qlist'\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtolist\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0mvectorizer\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mTfidfVectorizer\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;31m# 定义一个tf-idf的vectorizer\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36mparser_f\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)\u001b[0m\n\u001b[1;32m 653\u001b[0m skip_blank_lines=skip_blank_lines)\n\u001b[1;32m 654\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 655\u001b[0;31m \u001b[1;32mreturn\u001b[0m \u001b[0m_read\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkwds\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 656\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 657\u001b[0m \u001b[0mparser_f\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mname\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m_read\u001b[0;34m(filepath_or_buffer, kwds)\u001b[0m\n\u001b[1;32m 403\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 404\u001b[0m \u001b[1;31m# Create the parser.\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 405\u001b[0;31m \u001b[0mparser\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mTextFileReader\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 406\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 407\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mchunksize\u001b[0m \u001b[1;32mor\u001b[0m \u001b[0miterator\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, f, engine, **kwds)\u001b[0m\n\u001b[1;32m 762\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'has_index_names'\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mkwds\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'has_index_names'\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 763\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 764\u001b[0;31m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_make_engine\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mengine\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 765\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 766\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mclose\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m_make_engine\u001b[0;34m(self, engine)\u001b[0m\n\u001b[1;32m 983\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0m_make_engine\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mengine\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m'c'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 984\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;34m'c'\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 985\u001b[0;31m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_engine\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mCParserWrapper\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mf\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 986\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 987\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;34m'python'\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, src, **kwds)\u001b[0m\n\u001b[1;32m 1603\u001b[0m \u001b[0mkwds\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'allow_leading_cols'\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mindex_col\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[1;32mFalse\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 1604\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1605\u001b[0;31m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_reader\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mparsers\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mTextReader\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0msrc\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1606\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 1607\u001b[0m \u001b[1;31m# XXX\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mpandas\\_libs\\parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader.__cinit__ (pandas\\_libs\\parsers.c:4209)\u001b[0;34m()\u001b[0m\n",
"\u001b[0;32mpandas\\_libs\\parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader._setup_parser_source (pandas\\_libs\\parsers.c:8895)\u001b[0;34m()\u001b[0m\n",
"\u001b[0;31mOSError\u001b[0m: Initializing from file failed"
]
}
],
"source": [
"# TODO \n",
"import pandas as pd\n",
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"df_1 = pd.read_csv(r'E:\\NLP第7期\\作业\\project1\\qlist2.csv',encoding=\"gbk\")\n",
"qlist_2 = df_1['qlist'].tolist()\n",
"vectorizer = TfidfVectorizer() # 定义一个tf-idf的vectorizer\n",
"#df_2 = pd.read_csv(r'E:\\GreedyAI\\Week 4\\qlist2.csv')\n",
"#qlist_3 = df3['qlist'].tolist()\n",
"X_tfidf = vectorizer.fit_transform('qlist_2') # 结果存放在X矩阵里\n",
"print(X_tfidf.shape)\n",
"\n",
"# vectorizer = # 定义一个tf-idf的vectorizer\n",
"\n",
"# X_tfidf = # 结果存放在X矩阵里"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 使用wordvec + average pooling\n",
"词向量方面需要下载: https://nlp.stanford.edu/projects/glove/ (请下载``glove.6B.zip``),并使用``d=200``的词向量(200维)。国外网址如果很慢,可以在百度上搜索国内服务器上的。 每个词向量获取完之后,即可以得到一个句子的向量。 我们通过``average pooling``来实现句子的向量。 "
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"ename": "OSError",
"evalue": "Initializing from file failed",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mOSError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-15-6786e5350469>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[1;31m# 这需要从文本中读取\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mglovefile\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mopen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34mr\"E:\\NLP第7期\\作业\\第二次作业\\glove.6B\\glove.6B.200d.txt\"\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;34m\"r\"\u001b[0m\u001b[1;33m,\u001b[0m\u001b[0mencoding\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m\"utf-8\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m----> 5\u001b[0;31m \u001b[0mdf_1\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mpd\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mread_csv\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34mr'E:\\NLP第7期\\作业\\project1\\qlist2.csv'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 6\u001b[0m \u001b[0mqlist_2\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mdf_1\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'qlist'\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtolist\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mload_embedding\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0minput_text\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m \u001b[1;31m# function to convert embedding text into dictionary\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36mparser_f\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)\u001b[0m\n\u001b[1;32m 653\u001b[0m skip_blank_lines=skip_blank_lines)\n\u001b[1;32m 654\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 655\u001b[0;31m \u001b[1;32mreturn\u001b[0m \u001b[0m_read\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkwds\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 656\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 657\u001b[0m \u001b[0mparser_f\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mname\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m_read\u001b[0;34m(filepath_or_buffer, kwds)\u001b[0m\n\u001b[1;32m 403\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 404\u001b[0m \u001b[1;31m# Create the parser.\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 405\u001b[0;31m \u001b[0mparser\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mTextFileReader\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 406\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 407\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mchunksize\u001b[0m \u001b[1;32mor\u001b[0m \u001b[0miterator\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, f, engine, **kwds)\u001b[0m\n\u001b[1;32m 762\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'has_index_names'\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mkwds\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'has_index_names'\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 763\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 764\u001b[0;31m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_make_engine\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mengine\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 765\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 766\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mclose\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m_make_engine\u001b[0;34m(self, engine)\u001b[0m\n\u001b[1;32m 983\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0m_make_engine\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mengine\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m'c'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 984\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;34m'c'\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m--> 985\u001b[0;31m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_engine\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mCParserWrapper\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mf\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 986\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 987\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[1;33m==\u001b[0m \u001b[1;34m'python'\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mD:\\anaconda\\lib\\site-packages\\pandas\\io\\parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, src, **kwds)\u001b[0m\n\u001b[1;32m 1603\u001b[0m \u001b[0mkwds\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m'allow_leading_cols'\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mindex_col\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[1;32mFalse\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 1604\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1605\u001b[0;31m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_reader\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mparsers\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mTextReader\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0msrc\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1606\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m 1607\u001b[0m \u001b[1;31m# XXX\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
"\u001b[0;32mpandas\\_libs\\parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader.__cinit__ (pandas\\_libs\\parsers.c:4209)\u001b[0;34m()\u001b[0m\n",
"\u001b[0;32mpandas\\_libs\\parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader._setup_parser_source (pandas\\_libs\\parsers.c:8895)\u001b[0;34m()\u001b[0m\n",
"\u001b[0;31mOSError\u001b[0m: Initializing from file failed"
]
}
],
"source": [
"# TODO 基于Glove向量获取句子向量\n",
"# emb = # 这是 D*H的矩阵,这里的D是词典库的大小, H是词向量的大小。 这里面我们给定的每个单词的词向量,\n",
" # 这需要从文本中读取\n",
"glovefile = open(r\"E:\\NLP第7期\\作业\\第二次作业\\glove.6B\\glove.6B.200d.txt\",\"r\",encoding=\"utf-8\")\n",
"df_1 = pd.read_csv(r'E:\\NLP第7期\\作业\\project1\\qlist2.csv')\n",
"qlist_2 = df_1['qlist'].tolist()\n",
"def load_embedding(input_text): # function to convert embedding text into dictionary\n",
" word_emds = {} # build a dictionary\n",
" for line in input_text:\n",
" embedding = line.split(\" \") # split the line by space\n",
" vec = list(map(float, embedding[1:])) # apply float to all vectors, start from 1, as eles[0] is the word\n",
" word_emds[embedding[0]] = vec # build dictionary\n",
" return word_emds\n",
"\n",
"# 读取glove的预训练词向量\n",
"word_emds = load_embedding(glovefile)\n",
"# separate out from loading Glove, or else each time troubleshooting have to rerun Glove ....\n",
"\n",
"# create a for loop to download vector for each word from Glove\n",
"X_w2v = [] #the matrix for\n",
"counter = 0\n",
"\n",
"for sentence in qlist_2:\n",
" word_list = sentence.split(' ')\n",
" word_embs1 = []\n",
" for word in word_list:\n",
" if word and word_emds[word]:\n",
" word_embs1.append(word_emds[word])\n",
" else:\n",
" word_embs1.append(['0']*200)\n",
" X_w2v.append(word_embs1)\n",
" counter += 1\n",
"print(counter) \n",
"# X_w2v = # 初始化完emb之后就可以对每一个句子来构建句子向量了,这个过程使用average pooling来实现\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.3 使用BERT + average pooling\n",
"最近流行的BERT也可以用来学出上下文相关的词向量(contex-aware embedding), 在很多问题上得到了比较好的结果。在这里,我们不做任何的训练,而是直接使用已经训练好的BERT embedding。 具体如何训练BERT将在之后章节里体会到。 为了获取BERT-embedding,可以直接下载已经训练好的模型从而获得每一个单词的向量。可以从这里获取: https://github.com/imgarylai/bert-embedding , 请使用```bert_12_768_12```\t当然,你也可以从其他source获取也没问题,只要是合理的词向量。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vocab file is not found. Downloading.\n",
"Downloading C:\\Users\\Administrator\\.mxnet\\models\\book_corpus_wiki_en_cased-2d62af22.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/vocab/book_corpus_wiki_en_cased-2d62af22.zip...\n",
"Downloading C:\\Users\\Administrator\\.mxnet\\models\\bert_24_1024_16_book_corpus_wiki_en_cased-4e685a96.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/bert_24_1024_16_book_corpus_wiki_en_cased-4e685a96.zip...\n",
"download failed due to ChunkedEncodingError(ProtocolError('Connection broken: OSError(\"(10054, \\'WSAECONNRESET\\')\",)', OSError(\"(10054, 'WSAECONNRESET')\",)),), retrying, 4 attempts left\n",
"Downloading C:\\Users\\Administrator\\.mxnet\\models\\bert_24_1024_16_book_corpus_wiki_en_cased-4e685a96.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/bert_24_1024_16_book_corpus_wiki_en_cased-4e685a96.zip...\n"
]
}
],
"source": [
"# TODO 基于BERT的句子向量计算\n",
"from bert_embedding import BertEmbedding #下载不下来\n",
"bert_embedding = BertEmbedding(model='bert_24_1024_16', dataset_name='book_corpus_wiki_en_cased')\n",
"bert_embedding = BertEmbedding()\n",
"\n",
"X_bert = [] # 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。\n",
"counter = 0\n",
"\n",
"for sentence in qlist_2:\n",
" result = bert_embedding(sentence)\n",
" X_bert.append(result)\n",
" counter += 1\n",
" print(counter)\n",
"\n",
"print('counter', counter)\n",
"# X_bert = # 每一个句子的向量结果存放在X_bert矩阵里。行数为句子的总个数,列数为一个句子embedding大小。 "
]
},
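{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal, hedged sketch of the average-pooling step, not the required solution. It assumes the ``bert-embedding`` package behaves as its README describes: calling a ``BertEmbedding`` object on a list of sentences returns, for each sentence, a pair of (tokens, per-token vectors), and the default model ``bert_12_768_12`` produces 768-dimensional vectors. The function name and batch size are illustrative only.\n",
"\n",
"```python\n",
"import numpy as np\n",
"from bert_embedding import BertEmbedding\n",
"\n",
"def bert_sentence_vectors(sentences, batch_size=256):\n",
"    bert = BertEmbedding()  # defaults to bert_12_768_12\n",
"    vectors = []\n",
"    for start in range(0, len(sentences), batch_size):\n",
"        batch = sentences[start:start + batch_size]\n",
"        for tokens, token_vecs in bert(batch):\n",
"            if token_vecs:\n",
"                vectors.append(np.mean(token_vecs, axis=0))  # average pooling over token vectors\n",
"            else:\n",
"                vectors.append(np.zeros(768))  # empty sentence -> zero vector\n",
"    return np.array(vectors)\n",
"\n",
"# X_bert = bert_sentence_vectors(qlist_2)\n",
"```"
]
},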
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### 第三部分: 相似度匹配以及搜索\n",
"在这部分里,我们需要把用户每一个输入跟知识库里的每一个问题做一个相似度计算,从而得出最相似的问题。但对于这个问题,时间复杂度其实很高,所以我们需要结合倒排表来获取相似度最高的问题,从而获得答案。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.1 tf-idf + 余弦相似度\n",
"我们可以直接基于计算出来的``tf-idf``向量,计算用户最新问题与库中存储的问题之间的相似度,从而选择相似度最高的问题的答案。这个方法的复杂度为``O(N)``, ``N``是库中问题的个数。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def get_top_results_tfidf_noindex(query):\n",
" # TODO 需要编写\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 对于用户的输入 query 首先做一系列的预处理(上面提到的方法),然后再转换成tf-idf向量(利用上面的vectorizer)\n",
" 2. 计算跟每个库里的问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下标 \n",
" # hint: 请使用 priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案 "
]
},
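{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of how ``get_top_results_tfidf_noindex`` could work, offered as a hedged example rather than the official solution. It assumes ``vectorizer``, ``X_tfidf`` and ``alist`` from the cells above are in memory; for brevity it only lower-cases the query, whereas in practice the same preprocessing as for ``qlist`` should be applied first. The helper name and ``top_k`` parameter are illustrative.\n",
"\n",
"```python\n",
"import heapq\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"def top5_tfidf_noindex_sketch(query, top_k=5):\n",
"    query_vec = vectorizer.transform([query.lower()])     # reuse the fitted tf-idf vectorizer\n",
"    sims = cosine_similarity(query_vec, X_tfidf).ravel()  # similarity with every stored question, O(N)\n",
"    top_idxs = heapq.nlargest(top_k, range(len(sims)), key=sims.__getitem__)\n",
"    return [alist[i] for i in top_idxs]\n",
"```"
]
},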
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"print (get_top_results_tfidf_noindex(\"\"))\n",
"print (get_top_results_tfidf_noindex(\"\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"你会发现上述的程序很慢,没错! 是因为循环了所有库里的问题。为了优化这个过程,我们需要使用一种数据结构叫做```倒排表```。 使用倒排表我们可以把单词和出现这个单词的文档做关键。 之后假如要搜索包含某一个单词的文档,即可以非常快速的找出这些文档。 在这个QA系统上,我们首先使用倒排表来快速查找包含至少一个单词的文档,然后再进行余弦相似度的计算,即可以大大减少```时间复杂度```。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.2 倒排表的创建\n",
"倒排表的创建其实很简单,最简单的方法就是循环所有的单词一遍,然后记录每一个单词所出现的文档,然后把这些文档的ID保存成list即可。我们可以定义一个类似于```hash_map```, 比如 ``inverted_index = {}``, 然后存放包含每一个关键词的文档出现在了什么位置,也就是,通过关键词的搜索首先来判断包含这些关键词的文档(比如出现至少一个),然后对于candidates问题做相似度比较。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# TODO 请创建倒排表\n",
"inverted_idx = {} # 定一个一个简单的倒排表,是一个map结构。 循环所有qlist一遍就可以"
]
},
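{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of how the inverted index could be filled, assuming the preprocessed question list ``qlist_2`` (one cleaned string per question) from Part 1.3 is in memory; the variable name is illustrative.\n",
"\n",
"```python\n",
"from collections import defaultdict\n",
"\n",
"inverted_idx_sketch = defaultdict(set)\n",
"for doc_id, question in enumerate(qlist_2):\n",
"    for word in question.split():\n",
"        inverted_idx_sketch[word].add(doc_id)  # record that this word occurs in document doc_id\n",
"\n",
"# candidate documents containing a given word:\n",
"# candidates = inverted_idx_sketch.get('school', set())\n",
"```"
]
},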
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.3 语义相似度\n",
"这里有一个问题还需要解决,就是语义的相似度。可以这么理解: 两个单词比如car, auto这两个单词长得不一样,但从语义上还是类似的。如果只是使用倒排表我们不能考虑到这些单词之间的相似度,这就导致如果我们搜索句子里包含了``car``, 则我们没法获取到包含auto的所有的文档。所以我们希望把这些信息也存下来。那这个问题如何解决呢? 其实也不难,可以提前构建好相似度的关系,比如对于``car``这个单词,一开始就找好跟它意思上比较类似的单词比如top 10,这些都标记为``related words``。所以最后我们就可以创建一个保存``related words``的一个``map``. 比如调用``related_words['car']``即可以调取出跟``car``意思上相近的TOP 10的单词。 \n",
"\n",
"那这个``related_words``又如何构建呢? 在这里我们仍然使用``Glove``向量,然后计算一下俩俩的相似度(余弦相似度)。之后对于每一个词,存储跟它最相近的top 10单词,最终结果保存在``related_words``里面。 这个计算需要发生在离线,因为计算量很大,复杂度为``O(V*V)``, V是单词的总数。 \n",
"\n",
"这个计算过程的代码请放在``related.py``的文件里,然后结果保存在``related_words.txt``里。 我们在使用的时候直接从文件里读取就可以了,不用再重复计算。所以在此notebook里我们就直接读取已经计算好的结果。 作业提交时需要提交``related.py``和``related_words.txt``文件,这样在使用的时候就不再需要做这方面的计算了。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# TODO 读取语义相关的单词\n",
"def get_related_words(file):\n",
" \n",
" return related_words\n",
"\n",
"related_words = get_related_words('related_words.txt') # 直接放在文件夹的根目录下,不要修改此路径。"
]
},
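{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of the reader, assuming ``related_words.txt`` was produced offline by ``related.py`` with one whitespace-separated line per word of the form ``word related_1 ... related_10``; the actual file format is up to you, so adjust the parsing accordingly.\n",
"\n",
"```python\n",
"def get_related_words_sketch(path):\n",
"    related = {}\n",
"    with open(path, encoding='utf-8') as fh:\n",
"        for line in fh:\n",
"            parts = line.split()\n",
"            if parts:\n",
"                related[parts[0]] = parts[1:]  # word -> its top related words\n",
"    return related\n",
"```"
]
},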
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.4 利用倒排表搜索\n",
"在这里,我们使用倒排表先获得一批候选问题,然后再通过余弦相似度做精准匹配,这样一来可以节省大量的时间。搜索过程分成两步:\n",
"\n",
"- 使用倒排表把候选问题全部提取出来。首先,对输入的新问题做分词等必要的预处理工作,然后对于句子里的每一个单词,从``related_words``里提取出跟它意思相近的top 10单词, 然后根据这些top词从倒排表里提取相关的文档,把所有的文档返回。 这部分可以放在下面的函数当中,也可以放在外部。\n",
"- 然后针对于这些文档做余弦相似度的计算,最后排序并选出最好的答案。\n",
"\n",
"可以适当定义自定义函数,使得减少重复性代码"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def get_top_results_tfidf(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def get_top_results_w2v(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def get_top_results_bert(query):\n",
" \"\"\"\n",
" 给定用户输入的问题 query, 返回最有可能的TOP 5问题。这里面需要做到以下几点:\n",
" 1. 利用倒排表来筛选 candidate (需要使用related_words). \n",
" 2. 对于候选文档,计算跟输入问题之间的相似度\n",
" 3. 找出相似度最高的top5问题的答案\n",
" \"\"\"\n",
" \n",
" top_idxs = [] # top_idxs存放相似度最高的(存在qlist里的)问题的下表 \n",
" # hint: 利用priority queue来找出top results. 思考为什么可以这么做? \n",
" \n",
" return alist[top_idxs] # 返回相似度最高的问题对应的答案,作为TOP5答案"
]
},
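{
"cell_type": "markdown",
"metadata": {},
"source": [
"As promised above, here is a hedged sketch of the candidate-collection step shared by the three search functions. It assumes ``inverted_idx`` maps a word to a set of question indices and ``related_words`` maps a word to a list of similar words, both built in the earlier cells; the function name and ``top_related`` parameter are illustrative.\n",
"\n",
"```python\n",
"def collect_candidates(query_words, inverted_idx, related_words, top_related=10):\n",
"    candidates = set()\n",
"    for word in query_words:\n",
"        expansion = [word] + list(related_words.get(word, []))[:top_related]\n",
"        for term in expansion:\n",
"            candidates |= set(inverted_idx.get(term, ()))  # union of the postings lists\n",
"    return candidates\n",
"\n",
"# Each get_top_results_* function can then score only the candidate indices\n",
"# (with tf-idf, GloVe or BERT vectors) and keep the five best using a priority queue.\n",
"```"
]
},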
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# TODO: 编写几个测试用例,并输出结果\n",
"\n",
"test_query1 = \"\"\n",
"test_query2 = \"\"\n",
"\n",
"print (get_top_results_tfidf(test_query1))\n",
"print (get_top_results_w2v(test_query1))\n",
"print (get_top_results_bert(test_query1))\n",
"\n",
"print (get_top_results_tfidf(test_query2))\n",
"print (get_top_results_w2v(test_query2))\n",
"print (get_top_results_bert(test_query2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. 拼写纠错\n",
"其实用户在输入问题的时候,不能期待他一定会输入正确,有可能输入的单词的拼写错误的。这个时候我们需要后台及时捕获拼写错误,并进行纠正,然后再通过修正之后的结果再跟库里的问题做匹配。这里我们需要实现一个简单的拼写纠错的代码,然后自动去修复错误的单词。\n",
"\n",
"这里使用的拼写纠错方法是课程里讲过的方法,就是使用noisy channel model。 我们回想一下它的表示:\n",
"\n",
"$c^* = \\text{argmax}_{c\\in candidates} ~~p(c|s) = \\text{argmax}_{c\\in candidates} ~~p(s|c)p(c)$\n",
"\n",
"这里的```candidates```指的是针对于错误的单词的候选集,这部分我们可以假定是通过edit_distance来获取的(比如生成跟当前的词距离为1/2的所有的valid 单词。 valid单词可以定义为存在词典里的单词。 ```c```代表的是正确的单词, ```s```代表的是用户错误拼写的单词。 所以我们的目的是要寻找出在``candidates``里让上述概率最大的正确写法``c``。 \n",
"\n",
"$p(s|c)$,这个概率我们可以通过历史数据来获得,也就是对于一个正确的单词$c$, 有百分之多少人把它写成了错误的形式1,形式2... 这部分的数据可以从``spell_errors.txt``里面找得到。但在这个文件里,我们并没有标记这个概率,所以可以使用uniform probability来表示。这个也叫做channel probability。\n",
"\n",
"$p(c)$,这一项代表的是语言模型,也就是假如我们把错误的$s$,改造成了$c$, 把它加入到当前的语句之后有多通顺?在本次项目里我们使用bigram来评估这个概率。 举个例子: 假如有两个候选 $c_1, c_2$, 然后我们希望分别计算出这个语言模型的概率。 由于我们使用的是``bigram``, 我们需要计算出两个概率,分别是当前词前面和后面词的``bigram``概率。 用一个例子来表示:\n",
"\n",
"给定: ``We are go to school tomorrow``, 对于这句话我们希望把中间的``go``替换成正确的形式,假如候选集里有个,分别是``going``, ``went``, 这时候我们分别对这俩计算如下的概率:\n",
"$p(going|are)p(to|going)$和 $p(went|are)p(to|went)$, 然后把这个概率当做是$p(c)$的概率。 然后再跟``channel probability``结合给出最终的概率大小。\n",
"\n",
"那这里的$p(are|going)$这些bigram概率又如何计算呢?答案是训练一个语言模型! 但训练一个语言模型需要一些文本数据,这个数据怎么找? 在这次项目作业里我们会用到``nltk``自带的``reuters``的文本类数据来训练一个语言模型。当然,如果你有资源你也可以尝试其他更大的数据。最终目的就是计算出``bigram``概率。 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.1 训练一个语言模型\n",
"在这里,我们使用``nltk``自带的``reuters``数据来训练一个语言模型。 使用``add-one smoothing``"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from nltk.corpus import reuters\n",
"\n",
"# 读取语料库的数据\n",
"categories = reuters.categories()\n",
"corpus = reuters.sents(categories=categories)\n",
"\n",
"# 循环所有的语料库并构建bigram probability. bigram[word1][word2]: 在word1出现的情况下下一个是word2的概率。 \n",
"\n",
"\n"
]
},
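{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the bigram model with add-one (Laplace) smoothing, assuming ``corpus`` is the list of tokenized Reuters sentences loaded above; ``bigram_prob`` and the sentence-boundary markers are illustrative choices.\n",
"\n",
"```python\n",
"from collections import defaultdict\n",
"\n",
"unigram_counts = defaultdict(int)\n",
"bigram_counts = defaultdict(int)\n",
"for sent in corpus:\n",
"    tokens = ['<s>'] + [w.lower() for w in sent] + ['</s>']\n",
"    for i, w in enumerate(tokens):\n",
"        unigram_counts[w] += 1\n",
"        if i + 1 < len(tokens):\n",
"            bigram_counts[(w, tokens[i + 1])] += 1\n",
"\n",
"V = len(unigram_counts)  # vocabulary size used for smoothing\n",
"\n",
"def bigram_prob(w1, w2):\n",
"    # p(w2 | w1) with add-one smoothing\n",
"    return (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + V)\n",
"```"
]
},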
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.2 构建Channel Probs\n",
"基于``spell_errors.txt``文件构建``channel probability``, 其中$channel[c][s]$表示正确的单词$c$被写错成$s$的概率。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# TODO 构建channel probability \n",
"channel = {}\n",
"\n",
"for line in open('spell-errors.txt'):\n",
" # TODO\n",
"\n",
"# TODO\n",
"\n",
"print(channel) "
]
},
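{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of one way to fill the channel probabilities, assuming each line of ``spell-errors.txt`` looks like ``correct: wrong1, wrong2, ...`` and that, as stated in the data description, every listed misspelling of a word is equally likely.\n",
"\n",
"```python\n",
"channel_sketch = {}\n",
"for line in open('spell-errors.txt'):\n",
"    correct, _, errors = line.strip().partition(':')\n",
"    correct = correct.strip()\n",
"    mistakes = [e.strip() for e in errors.split(',') if e.strip()]\n",
"    if mistakes:\n",
"        prob = 1.0 / len(mistakes)  # uniform over the listed misspellings\n",
"        channel_sketch[correct] = {m: prob for m in mistakes}\n",
"```"
]
},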
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.3 根据错别字生成所有候选集合\n",
"给定一个错误的单词,首先生成跟这个单词距离为1或者2的所有的候选集合。 这部分的代码我们在课程上也讲过,可以参考一下。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def generate_candidates(word):\n",
" # 基于拼写错误的单词,生成跟它的编辑距离为1或者2的单词,并通过词典库的过滤。\n",
" # 只留写法上正确的单词。 \n",
" \n",
" \n"
]
},
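{
"cell_type": "markdown",
"metadata": {},
"source": [
"A compact, hedged sketch of candidate generation by edit distance, assuming ``vocab`` is the set of words loaded from ``vocab.txt``; it mirrors the classic edits1/edits2 construction from the lecture.\n",
"\n",
"```python\n",
"import string\n",
"\n",
"def edits1(word):\n",
"    letters = string.ascii_lowercase\n",
"    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]\n",
"    deletes = [l + r[1:] for l, r in splits if r]\n",
"    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]\n",
"    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]\n",
"    inserts = [l + c + r for l, r in splits for c in letters]\n",
"    return set(deletes + transposes + replaces + inserts)\n",
"\n",
"def generate_candidates_sketch(word, vocab):\n",
"    near = edits1(word)\n",
"    near |= {e2 for e1 in near for e2 in edits1(e1)}  # extend to edit distance 2\n",
"    return [w for w in near if w in vocab]            # keep only valid words\n",
"```"
]
},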
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.4 给定一个输入,如果有错误需要纠正\n",
"\n",
"给定一个输入``query``, 如果这里有些单词是拼错的,就需要把它纠正过来。这部分的实现可以简单一点: 对于``query``分词,然后把分词后的每一个单词在词库里面搜一下,假设搜不到的话可以认为是拼写错误的! 人如果拼写错误了再通过``channel``和``bigram``来计算最适合的候选。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def spell_corrector(line):\n",
" # 1. 首先做分词,然后把``line``表示成``tokens``\n",
" # 2. 循环每一token, 然后判断是否存在词库里。如果不存在就意味着是拼写错误的,需要修正。 \n",
" # 修正的过程就使用上述提到的``noisy channel model``, 然后从而找出最好的修正之后的结果。 \n",
" \n",
" return newline # 修正之后的结果,假如用户输入没有问题,那这时候``newline = line``\n"
]
},
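{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged end-to-end sketch of the corrector, assuming ``vocab``, a ``channel`` dictionary, the ``bigram_prob`` function and a candidate generator like the sketches above are available; it scores each candidate by its channel probability times the two surrounding bigram probabilities (in log space). All names are illustrative.\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def spell_corrector_sketch(line, vocab, channel, bigram_prob, generate_candidates):\n",
"    tokens = line.lower().split()\n",
"    corrected = []\n",
"    for i, tok in enumerate(tokens):\n",
"        if tok in vocab:  # valid word, keep as-is\n",
"            corrected.append(tok)\n",
"            continue\n",
"        prev_w = corrected[-1] if corrected else '<s>'\n",
"        next_w = tokens[i + 1] if i + 1 < len(tokens) else '</s>'\n",
"        best, best_score = tok, float('-inf')\n",
"        for cand in generate_candidates(tok, vocab):\n",
"            p_channel = channel.get(cand, {}).get(tok, 1e-10)  # p(s|c), small floor if unseen\n",
"            score = (math.log(p_channel)\n",
"                     + math.log(bigram_prob(prev_w, cand))     # p(c | previous word)\n",
"                     + math.log(bigram_prob(cand, next_w)))    # p(next word | c)\n",
"            if score > best_score:\n",
"                best_score, best = score, cand\n",
"        corrected.append(best)\n",
"    return ' '.join(corrected)\n",
"```"
]
},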
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.5 基于拼写纠错算法,实现用户输入自动矫正\n",
"首先有了用户的输入``query``, 然后做必要的处理把句子转换成tokens的形状,然后对于每一个token比较是否是valid, 如果不是的话就进行下面的修正过程。 "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"test_query1 = \"\" # 拼写错误的\n",
"test_query2 = \"\" # 拼写错误的\n",
"\n",
"test_query1 = spell_corector(test_query1)\n",
"test_query2 = spell_corector(test_query2)\n",
"\n",
"print (get_top_results_tfidf(test_query1))\n",
"print (get_top_results_w2v(test_query1))\n",
"print (get_top_results_bert(test_query1))\n",
"\n",
"print (get_top_results_tfidf(test_query2))\n",
"print (get_top_results_w2v(test_query2))\n",
"print (get_top_results_bert(test_query2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 附录 \n",
"在本次项目中我们实现了一个简易的问答系统。基于这个项目,我们其实可以有很多方面的延伸。\n",
"- 在这里,我们使用文本向量之间的余弦相似度作为了一个标准。但实际上,我们也可以基于基于包含关键词的情况来给一定的权重。比如一个单词跟related word有多相似,越相似就意味着相似度更高,权重也会更大。 \n",
"- 另外 ,除了根据词向量去寻找``related words``也可以提前定义好同义词库,但这个需要大量的人力成本。 \n",
"- 在这里,我们直接返回了问题的答案。 但在理想情况下,我们还是希望通过问题的种类来返回最合适的答案。 比如一个用户问:“明天北京的天气是多少?”, 那这个问题的答案其实是一个具体的温度(其实也叫做实体),所以需要在答案的基础上做进一步的抽取。这项技术其实是跟信息抽取相关的。 \n",
"- 对于词向量,我们只是使用了``average pooling``, 除了average pooling,我们也还有其他的经典的方法直接去学出一个句子的向量。\n",
"- 短文的相似度分析一直是业界和学术界一个具有挑战性的问题。在这里我们使用尽可能多的同义词来提升系统的性能。但除了这种简单的方法,可以尝试其他的方法比如WMD,或者适当结合parsing相关的知识点。 "
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"好了,祝你好运! "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}