QA_main.ipynb
Intelligent Q&A System (main file)
Here we build a lightweight intelligent question answering system. The required modules are:
- Text preprocessing: already written for you; just read through the code.
- Intent classifier: also provided; it uses fastText for intent recognition.
- Inverted index: you need to build this yourself, and you also need to account for similar words (covered in the course videos).
- Ranking: for the candidates returned by the inverted index, we compute the cosine similarity between the query and each candidate question, then return the answer of the most similar question. We use BERT to produce the sentence vectors.
In [1]:
import os
# Allow duplicate OpenMP runtimes (works around a common libiomp conflict on Windows)
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
In [2]:
import pandas as pd
from tqdm import tqdm
import numpy as np
import pickle
import emoji
import re
import jieba
import torch
import fasttext
from sys import platform
Out [2]:
C:\Users\avaws\anaconda3\envs\nlp_gensim\lib\site-packages\tqdm\auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm
In [3]:
# Load the preprocessed data: import the data/question_answer_pares.pkl file generated in
# preprocessor.ipynb and store it in the variable QApares
with open('./data/question_answer_pares.pkl', 'rb') as f:
    QApares = pickle.load(f)
# Add a column recording the length of each token list
QApares["num"] = QApares["question_after_preprocessing"].apply(len)
# Drop rows whose token list is empty, then reset the index
QApares = QApares.loc[QApares.num > 0].reset_index(drop=True)
In [4]:
QApares
| | question | answer | question_after_preprocessing | num |
|---|---|---|---|---|
| 0 | 买二份有没有少点呀 | 亲亲真的不好意思我们已经是优惠价了呢小本生意请亲谅解 | [买, 二份, 有没有, 少点] | 4 |
| 1 | 那就等你们处理喽 | 好的亲退了 | [处理] | 1 |
| 2 | 那我不喜欢 | 颜色的话一般茶刀茶针和二合一的话都是红木檀和黑木檀哦 | [喜欢] | 1 |
| 3 | 不是免运费 | 本店茶具订单满99包邮除宁夏青海内蒙古海南新疆西藏满39包邮 | [免, 运费] | 2 |
| 4 | 好吃吗 | 好吃的 | [好吃] | 1 |
| ... | ... | ... | ... | ... |
| 86902 | 哪个比较快 | 一般都差不多的哦亲爱哒客官小店是从浙江嘉兴发货的哦一般发货后35天就能到您那边呢请您耐心等下哦 | [比较, 快] | 2 |
| 86903 | 已经提交申请了谢谢 | 好的亲稍等 | [已经, 提交, 申请, 谢谢] | 4 |
| 86904 | 就一些浮油你还想一张有一层油啊 | 明天我给主管看下吧 | [想, 一张, 一层, 油] | 4 |
| 86905 | 他说丟了 | 好的 | [说, 丟了] | 2 |
| 86906 | 辽宁营口 | 4袋包邮哦 | [辽宁, 营口] | 2 |

86907 rows × 4 columns
In [5]:
# Load the inverted-index file data/retrieve/invertedList.pkl generated in Retrieve.ipynb
# and store it in the variable invertedList
with open('./data/retrieve/invertedList.pkl', 'rb') as f:
    invertedList = pickle.load(f)
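The index itself is built in Retrieve.ipynb, which is not shown here. For orientation, here is a minimal sketch of what such a construction could look like: each word maps to the set of QApares row indices whose question contains it. This is a hypothetical reconstruction, and it omits the synonym expansion mentioned in the course videos.

```python
# Hypothetical sketch of the inverted-index build (the real code lives in Retrieve.ipynb)
from collections import defaultdict

def build_inverted_list(qa_df):
    inverted = defaultdict(set)
    for idx, words in enumerate(qa_df["question_after_preprocessing"]):
        for word in words:
            inverted[word].add(idx)  # word -> set of row indices containing it
    return dict(inverted)

# invertedList = build_inverted_list(QApares)
```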
In [6]:
# This cell is copied from preprocessor.ipynb and contains the key preprocessing functions
import pickle
import emoji
import re
import jieba

def clean(content):
    content = emoji.demojize(content)
    content = re.sub('<.*>', '', content)
    return content

# Tokenize a sentence. In preprocessor.ipynb the data was already tokenized, so this step
# was skipped there, but for a new incoming question it is indispensable.
def question_cut(content):
    return list(jieba.cut(content))

def strip(wordList):
    return [word.strip() for word in wordList if word.strip() != '']

with open("data/stopWord.json", "r", encoding="utf-8") as f:
    stopWords = f.read().split("\n")

def rm_stop_word(wordList):
    return [word for word in wordList if word not in stopWords]

def get_retrieve_result(sentence):
    '''
    Given a sentence, use the inverted index for fast retrieval and return the indices
    of candidate questions close to it. The candidates are the questions containing any
    word of the sentence, or any word whose meaning is close to a word of the sentence.
    '''
    sentence = clean(sentence)
    sentence = question_cut(sentence)
    sentence = strip(sentence)
    sentence = rm_stop_word(sentence)
    candidate = set()
    for word in sentence:
        if word in invertedList:
            candidate = candidate | invertedList[word]
    return candidate
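A quick usage sketch (the query text is illustrative, not an executed cell):

```python
candidate = get_retrieve_result('什么时候发货')
print(len(candidate))         # size of the candidate pool
print(sorted(candidate)[:5])  # a few QApares row indices
```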
In [7]:
# Load the trained fastText model for intent recognition
intention = fasttext.load_model('model/fasttext.ftz')

def get_intention_result(sentence):
    '''
    Given a sentence, return the intent-recognition result.
    Input:
        sentence: the input sentence
    Output:
        fasttext_label: the fastText prediction, one of two labels:
        __label__0 (chitchat) and __label__1 (task-oriented)
    '''
    fasttext_label = intention.predict(sentence)[0][0]
    return fasttext_label
Out [7]:
Warning : `load_model` does not return WordVectorModel or SupervisedModel any more, but a `FastText` object which is very similar.
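For context, model/fasttext.ftz is loaded ready-made above. A hypothetical sketch of how such a classifier could be trained follows; the training-file path and hyperparameters are assumptions, and the .ftz suffix suggests a quantized model.

```python
import fasttext

# Assumed training file: one example per line in fastText format,
# e.g. "__label__0 今天 天气 不错" (tokens separated by spaces, label first)
model = fasttext.train_supervised(input="data/intention_train.txt",
                                  epoch=25, lr=1.0, wordNgrams=2)
# Quantize to shrink the model; quantized fastText models conventionally use .ftz
model.quantize(input="data/intention_train.txt", retrain=True)
model.save_model("model/fasttext.ftz")
```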
In [8]:
from transformers import BertTokenizer, BertModel
import torch

tokenizer = BertTokenizer.from_pretrained("./chinese_wwm_pytorch/")
model = BertModel.from_pretrained("./chinese_wwm_pytorch/").to("cuda")

def get_bert_embedding(sentence):
    # Mean-pool the last hidden states to get a single sentence vector
    inputs = tokenizer(sentence, return_tensors="pt").to("cuda")
    with torch.no_grad():  # inference only; skip building the autograd graph
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(1)
Out [8]:
Some weights of the model checkpoint at ./chinese_wwm_pytorch/ were not used when initializing BertModel: ['cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
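A quick sanity check of the embedding shape (chinese_wwm_pytorch is a whole-word-masking BERT-base checkpoint, so a hidden size of 768 is the expected value):

```python
emb = get_bert_embedding('什么时候发货')
print(emb.shape)  # expected: torch.Size([1, 768]) for a base-size BERT
```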
In [9]:
def get_best_answer(sentence, candidate):
    """
    sentence: the user's query, already preprocessed
    candidate: indices (into QApares) of candidate questions returned by the inverted index
    Returns: the best answer, as a string
    """
    candidate = list(candidate)  # fix an order so positions map back to QApares row indices
    cosin_li = []
    sentence_ = get_bert_embedding(sentence)
    for each in tqdm(candidate):
        each_ = get_bert_embedding(" ".join(QApares["question_after_preprocessing"][each]))
        cosin_li.append(torch.nn.functional.cosine_similarity(each_, sentence_).to("cpu").detach().numpy()[0])
    # argmax gives a position within candidate, not a QApares row index; map it back first
    best = candidate[int(np.array(cosin_li).argmax())]
    return QApares["answer"][best]
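The ranking criterion in isolation, as a minimal sketch with made-up inputs: a paraphrase pair should score higher than an unrelated pair.

```python
a = get_bert_embedding('什么时候发货')
b = get_bert_embedding('几时能发货')
c = get_bert_embedding('今天天气不错')
print(torch.nn.functional.cosine_similarity(a, b).item())  # paraphrases: expect a higher score
print(torch.nn.functional.cosine_similarity(a, c).item())  # unrelated: expect a lower score
```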
In [20]:
def QA(sentence):
    '''
    A simple customer-service bot: given a sentence, return an answer.
    '''
    # If the intent is chitchat, return the default reply '闲聊机器人'
    if get_intention_result(sentence) == '__label__0':
        return '闲聊机器人'
    # Retrieve the candidate question set via the inverted index
    candidate = get_retrieve_result(sentence)
    # If the candidate set is empty, return the default reply '我不明白你在说什么'
    if len(candidate) == 0:
        return '我不明白你在说什么'
    # Otherwise rank the candidates with BERT and return the best answer
    return get_best_answer(sentence, candidate)
In [11]:
# Test
QA('发什么快递')
Out [11]:
Building prefix dict from the default dictionary ... Loading model from cache C:\Users\avaws\AppData\Local\Temp\jieba.cache Loading model cost 0.638 seconds. Prefix dict has been built successfully.
In [18]:
# Test
QA('什么时候发货呀?')
Out [18]:
什么时候发货呀?
In [21]:
# Test
QA('最快什么时候可以发货呢,亲??')
Out [21]:
100%|██████████████████████████████████████████████████████████████████████████████| 9382/9382 [02:00<00:00, 77.98it/s]
In [14]:
# Test
QA('一二三四五六七')
Out [14]:
'闲聊机器人'
In [15]:
# The original plan was to precompute the BERT vectors of all questions up front,
# but the GPU is too small and the batch size cannot be made large enough,
# so this approach was abandoned.
# from transformers import BertTokenizer, BertModel
# from torch.utils.data import DataLoader, Dataset
# from torch.nn import Module

# class MyDataset(Dataset):
#     def __init__(self, df, tokenizer):
#         super().__init__()
#         self.data = df["question_after_preprocessing"]
#         self.len = len(df.values)
#     def __getitem__(self, index):
#         token = tokenizer(" ".join(self.data[index]), return_tensors="pt", max_length=64, padding="max_length").to(DEVICE)
#         token['input_ids'].squeeze_()
#         return token
#     def __len__(self):
#         return self.len

# class MyModel(Module):
#     def __init__(self):
#         super().__init__()
#         self.bert = BertModel.from_pretrained("./chinese_wwm_pytorch/")
#     def forward(self, x):
#         print(f"x {x}")
#         out = self.bert(**x)
#         print(f"out {out}")
#         out = out.last_hidden_state.mean(1)
#         return out

# BATCHSIZE = 32
# DEVICE = "cuda"
# tokenizer = BertTokenizer.from_pretrained("./chinese_wwm_pytorch/")
# myds = MyDataset(QApares, tokenizer)
# mydl = DataLoader(dataset=myds, batch_size=BATCHSIZE, shuffle=False)
# model = MyModel().to(DEVICE)
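That said, the memory problem above is usually avoidable: embed in small batches under no_grad and move each batch to the CPU immediately, so only one batch ever lives on the GPU. A sketch under those assumptions (not the author's method; the save path is made up, and the mask-weighted mean keeps padded positions out of the average, unlike a plain .mean(1) over padded batches):

```python
import torch

@torch.no_grad()
def precompute_embeddings(batch_size=16):
    texts = [" ".join(words) for words in QApares["question_after_preprocessing"]]
    chunks = []
    for i in range(0, len(texts), batch_size):
        inputs = tokenizer(texts[i:i + batch_size], return_tensors="pt",
                           padding=True, truncation=True, max_length=64).to("cuda")
        out = model(**inputs).last_hidden_state
        mask = inputs["attention_mask"].unsqueeze(-1)  # zero out padded positions
        emb = (out * mask).sum(1) / mask.sum(1)        # mask-weighted mean pooling
        chunks.append(emb.cpu())                       # free GPU memory right away
    return torch.cat(chunks)

# question_embs = precompute_embeddings()
# torch.save(question_embs, "data/question_embs.pt")  # path is an assumption
```

With these cached vectors, get_best_answer would reduce to one matrix multiply against the candidate rows instead of one BERT forward pass per candidate.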