Commit e57c3e70 by 20210509028

Upload New File

parent 8f5c1b4c
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 智能问答系统(主文件)\n",
"\n",
"在这里我们来搭建一个轻量级智能问答系统,所需要的模块,包括:\n",
"- 文本预处理:这部分已经帮大家写好,只需要看看代码就可以了。\n",
"- 搭建意图识别分类器:这部分也给大家写好了,使用fastText来做的意图识别器\n",
"- 倒排表:这部分大家需要自己去创建,同时也需要考虑相似的单词(课程视频中讲过)\n",
"- 排序:基于倒排表返回的结果,我们再根据余弦相似度来计算query跟候选问题之间的相似度,最后返回相似度最高的问题的答案。这里,我们将使用BERT来表示句子的向量。 "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"## %env KMP_DUPLICATE_LIB_OK=TRUE "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"#导入包\n",
"import pandas as pd\n",
"from tqdm import tqdm\n",
"import numpy as np\n",
"import pickle\n",
"import emoji\n",
"import re\n",
"import jieba\n",
"import torch\n",
"import fasttext\n",
"from sys import platform\n",
"from torch.utils.data import DataLoader\n",
"from transformers import BertTokenizer\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>question</th>\n",
" <th>answer</th>\n",
" <th>question_after_preprocessing</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>买二份有没有少点呀</td>\n",
" <td>亲亲真的不好意思我们已经是优惠价了呢小本生意请亲谅解</td>\n",
" <td>[买]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>那就等你们处理喽</td>\n",
" <td>好的亲退了</td>\n",
" <td>[]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>那我不喜欢</td>\n",
" <td>颜色的话一般茶刀茶针和二合一的话都是红木檀和黑木檀哦</td>\n",
" <td>[]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>不是免运费</td>\n",
" <td>本店茶具订单满99包邮除宁夏青海内蒙古海南新疆西藏满39包邮</td>\n",
" <td>[运费]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>好吃吗</td>\n",
" <td>好吃的</td>\n",
" <td>[好吃]</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" question answer question_after_preprocessing\n",
"0 买二份有没有少点呀 亲亲真的不好意思我们已经是优惠价了呢小本生意请亲谅解 [买]\n",
"1 那就等你们处理喽 好的亲退了 []\n",
"2 那我不喜欢 颜色的话一般茶刀茶针和二合一的话都是红木檀和黑木檀哦 []\n",
"3 不是免运费 本店茶具订单满99包邮除宁夏青海内蒙古海南新疆西藏满39包邮 [运费]\n",
"4 好吃吗 好吃的 [好吃]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 读取已经处理好的数据: 导入在preprocessor.ipynb中生成的data/question_answer_pares.pkl文件,并将其保存在变量QApares中\n",
"with open('C:/Users/cuishufeng-ghq/Documents/tanxin/wenda/data/question_answer_pares.pkl','rb') as f:\n",
" QApares = pickle.load(f)\n",
"QApares.head()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[4, 25, 25, 180, 315, 315, 395, 647, 671, 872, 879]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 导入在Retrieve.ipynb中生成的data/retrieve/invertedList.pkl倒排表文件,并将其保存在变量invertedList中\n",
"with open('C:/Users/cuishufeng-ghq/Documents/tanxin/wenda/data/retrieve/invertedList.pkl','rb') as f:\n",
" invertedList = pickle.load(f)\n",
"invertedList['好吃']"
]
},
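{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal sketch (not the actual Retrieve.ipynb code, which also expands the index with similar words) of how an inverted index like `invertedList` could be rebuilt from `QApares.question_after_preprocessing`: each word is mapped to the indices of the questions that contain it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch of building an inverted index from the preprocessed questions.\n",
"# Illustrative only; the index actually used above was built in Retrieve.ipynb.\n",
"def build_inverted_index(questions):\n",
"    inverted = {}\n",
"    for idx, words in enumerate(questions):\n",
"        for word in set(words):              # each question contributes each word once\n",
"            inverted.setdefault(word, []).append(idx)\n",
"    return inverted\n",
"\n",
"# Example: rebuild the index from the loaded QApares and look up a word\n",
"rebuilt = build_inverted_index(QApares.question_after_preprocessing)\n",
"rebuilt.get('好吃', [])[:5]"
]
},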
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# 这一格的内容是从preprocessor.ipynb中粘贴而来,包含了数据预处理的几个关键函数,这部分用来处理input query string\n",
"import emoji\n",
"import re\n",
"import jieba\n",
"def clean(content):\n",
" content = emoji.demojize(content)\n",
" content = re.sub('<.*>','',content)\n",
" return content\n",
"def question_cut(content):\n",
" return list(jieba.cut(content)) # 将分词结果转换为列表\n",
"def strip(wordList):\n",
" return [word.strip() for word in wordList if word.strip()!=''] # 去掉wordList的空格回车换行制表等符号\n",
"with open(\"C:/Users/cuishufeng-ghq/Documents/tanxin/wenda/data/stopWord.json\",\"r\",encoding=\"utf-8\") as f:\n",
" stopWords = f.read().split(\"\\n\") # 引入停用词\n",
" \n",
"def rm_stop_word(wordList):\n",
" return [word for word in wordList if word not in stopWords] # 去除停用词\n",
"\n",
"def text_processing(sentence): #该函数没有返回结果\n",
" sentence = clean(sentence) ## 转换表情符号和剔除html符号\n",
" sentence = question_cut(sentence) ## 使用结巴进行分词,并将分词结果转换成列表\n",
" sentence = strip(sentence) ## 去除分词后的结果中制表符、换行符、空格等符号\n",
" sentence = rm_stop_word(sentence) ## 去除停用词\n",
" return sentence # 我认为缺失的\n",
"# 这一格是从Retrieve中粘贴而来,用于生成与输入数据较相近的一些候选问题的index\n",
"def get_retrieve_result(sentence):\n",
" \"\"\"\n",
" 基于输入句子,并利用倒排表返回candidate sentence ids\n",
" \"\"\"\n",
" sentence = text_processing(sentence)\n",
" candidate = set()\n",
" for word in sentence:\n",
" if word in invertedList:\n",
" candidate = candidate | set(invertedList[word])\n",
" return candidate"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Building prefix dict from the default dictionary ...\n",
"Loading model from cache C:\\Users\\CUISHU~1\\AppData\\Local\\Temp\\jieba.cache\n",
"Loading model cost 1.127 seconds.\n",
"Prefix dict has been built successfully.\n"
]
},
{
"data": {
"text/plain": [
"[264,\n",
" 142,\n",
" 401,\n",
" 18,\n",
" 150,\n",
" 152,\n",
" 665,\n",
" 921,\n",
" 413,\n",
" 800,\n",
" 290,\n",
" 932,\n",
" 38,\n",
" 560,\n",
" 179,\n",
" 947,\n",
" 182]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#测试备选集合函数\n",
"list(get_retrieve_result(\"没钱怎么吃饭\"))[3:20]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Warning : `load_model` does not return WordVectorModel or SupervisedModel any more, but a `FastText` object which is very similar.\n"
]
}
],
"source": [
"# 加载训练好的fasttext模型用于意图识别\n",
"intention = fasttext.load_model('C:/Users/cuishufeng-ghq/Documents/tanxin/wenda/model/fasttext.ftz')\n",
"\n",
"def get_intention_result(sentence):\n",
" '''\n",
" 输入句子,返回意图识别结果\n",
" 入参:\n",
" sentence:输入的句子\n",
" 出参:\n",
" fasttext_label:fasttext模型的输出,共有两种结果:__label__0和__label__1。__label__0表示闲聊型,__label__1表示任务型\n",
" '''\n",
" sentence = text_processing(sentence)\n",
" sentence = ' '.join(sentence)\n",
" fasttext_label = intention.predict(sentence)[0][0]\n",
" return fasttext_label"
]
},
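{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the next cell is a hedged sketch of how a fastText intent classifier like the one loaded above might be trained. The training-file name `intention_train.txt` and the hyperparameters are placeholders, not the settings used to produce `fasttext.ftz`; each line of such a file contains a `__label__0` / `__label__1` tag followed by the space-separated tokens of a query."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical training sketch for the intent classifier.\n",
"# TRAIN_FILE and the hyperparameters below are assumptions; a training line looks like:\n",
"#   __label__1 发 什么 快递\n",
"import os\n",
"TRAIN_FILE = 'intention_train.txt'\n",
"if os.path.exists(TRAIN_FILE):\n",
"    clf = fasttext.train_supervised(input=TRAIN_FILE, epoch=25, lr=0.5, wordNgrams=2)\n",
"    print(clf.predict('发 什么 快递'))  # e.g. (('__label__1',), array([...]))\n",
"else:\n",
"    print('intention_train.txt not found; this cell is only an illustrative sketch.')"
]
},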
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'__label__1'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#测试意图识别函数\n",
"get_intention_result('我 想 买包 , 还 要 吃 美食')"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"#导入与bert embedding相关的包,关于mxnet包下载的注意事项参考实验手册\n",
"from bert_embedding import BertEmbedding\n",
"import mxnet\n",
"ctx = mxnet.cpu(0)\n",
"embedding = BertEmbedding(model='bert_12_768_12', dataset_name='wiki_cn_cased', ctx=ctx)\n",
"#计算bert句向量\n",
"def bert_embedding_averaging(sentence):\n",
" \"\"\"返回sentence bert 句向量\"\"\"\n",
" tokens, token_embeddings = embedding([sentence])[0]\n",
" return np.mean(np.array(token_embeddings), axis=0).astype(np.float32)\n",
"\n",
"#计算余弦相似度\n",
"def cos_sim(vector_a, vector_b): \n",
" vector_a = np.mat(vector_a)\n",
" vector_b = np.mat(vector_b)\n",
" num = float(vector_a * vector_b.T)\n",
" denom = np.linalg.norm(vector_a) * np.linalg.norm(vector_b)\n",
" cos = num / denom\n",
" sim = 0.5 + 0.5 * cos\n",
" return sim"
]
},
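{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check of `cos_sim`: because the raw cosine is rescaled with `0.5 + 0.5 * cos`, identical vectors score 1.0, orthogonal vectors 0.5, and opposite vectors 0.0."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check for the rescaled cosine similarity\n",
"a = np.array([1.0, 0.0])\n",
"b = np.array([0.0, 1.0])\n",
"print(cos_sim(a, a))   # identical vectors  -> 1.0\n",
"print(cos_sim(a, b))   # orthogonal vectors -> 0.5\n",
"print(cos_sim(a, -a))  # opposite vectors   -> 0.0"
]
},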
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def get_best_answer(sentence):\n",
" \"\"\"\n",
" sentence: 用户输入query, 已经处理好的\n",
" candidate: 通过倒排表返回的候选问题的下标列表\n",
" 返回:最佳回复,string形式\n",
" \"\"\"\n",
" ## TODO: 你需要完成这部分\n",
" ## 计算query跟每一个候选问题之间的余弦相似度,最后排序\n",
" ## 每个query跟候选问题首先要转化成向量形式,这里使用BERT embedding (可参考第一次项目作业)。 如果你想尝试,其他的embedding方法,也可以自行尝试如tf-idf\n",
" \n",
" #获得输入句子的向量\n",
" input_vector=bert_embedding_averaging(sentence)\n",
" candidate=list(get_retrieve_result(sentence))\n",
" answer={}\n",
" for i in range(len(candidate)):\n",
" sentence_list=QApares.question_after_preprocessing[candidate[i]]\n",
" sentence_str=\" \".join(sentence_list)\n",
" sentence_vector=bert_embedding_averaging(sentence_str)\n",
" similarity=cos_sim(input_vector,sentence_vector)\n",
" answer[candidate[i]]=similarity\n",
" best_answer_key=max(zip(answer.keys(),answer.values()))[0]\n",
" best_answer=QApares.answer[best_answer_key]\n",
" best_question=QApares.question[best_answer_key]\n",
" return \"最相似问题: \"+best_question,\"最佳答案: \"+best_answer\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"def QA(sentence):\n",
" '''\n",
" 实现一个智能客服系统,输入一个句子sentence,返回一个回答\n",
" '''\n",
" # 若意图识别结果为闲聊型,则默认返回'闲聊机器人'\n",
" if get_intention_result(sentence)=='__label__0':\n",
" return '闲聊机器人'\n",
" # 根据倒排表进行检索获得候选问题集\n",
" candidate = get_retrieve_result(sentence)\n",
" # 若候选问题集大小为0,默认返回'我不明白你在说什么'\n",
" if len(candidate)==0:\n",
" return '我不明白你在说什么'\n",
" \n",
" return get_best_answer(sentence)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 22.4 s\n"
]
},
{
"data": {
"text/plain": [
"('最相似问题: 最好今天发', '最佳答案: 帮亲备注优先发货了哦')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA('发什么快递')"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 3.01 s\n"
]
},
{
"data": {
"text/plain": [
"('最相似问题: 我之前白天催了好几次都不发货也不退款', '最佳答案: 不好客官呢明天白天帮您处理哦')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA('怎么退款')"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 52.2 ms\n"
]
},
{
"data": {
"text/plain": [
"'我不明白你在说什么'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA('这个商品有优惠券吗')"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 21 ms\n"
]
},
{
"data": {
"text/plain": [
"'我不明白你在说什么'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA('一二三四五六七')"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 3.87 s\n"
]
},
{
"data": {
"text/plain": [
"('最相似问题: 本来还想说如果好吃下次再来你家买呢但没有核桃夹子多不方便啊', '最佳答案: 亲亲我们是手剥的哦皮很薄哦')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA('好吃不')"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 18.8 s\n"
]
},
{
"data": {
"text/plain": [
"('最相似问题: 今天发货吗', '最佳答案: 快递晚上过来揽件的哦亲亲明天关注下物流还没有消息的话这边您再联系我们哦')"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA(\"发货了么\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Wall time: 2.76 s\n"
]
},
{
"data": {
"text/plain": [
"('最相似问题: 明明是你们好几天没发货', '最佳答案: 麻烦您的修改下退款申请哦改成不想要或者其他哦')"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# 测试\n",
"QA(\"几天能到\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"一些感触:<br>\n",
" 1.存储好中间结果会提升模型训练速度,如倒排表结果,预训练好的词向量等;<br>\n",
" 2.有些答案就在问题里<br>\n",
" 3.有些答案还是与实际相差较大,这可能与数据量太小有缘故<br>\n",
" 4.数据量不是很大,相似性倒排表不够大;<br>\n",
" 5.运行速度还是不够快,这么小的集合还是需要3s以上,<br>\n",
" 6.意图识别没有利用好<br>\n",
" 7.有警告,不知是什么含义,对结果有什么影响,不知道怎么修改<br>\n",
" 8.写这个代码用了一天时间,不知道别人都是花多长时间<br>\n",
" 9.加了分支以后复杂性会呈几何基数递增<br>\n",
" 10.一轮对话尚且比较困难,多轮对话难度应该也是几何基数递增<br>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}