{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# please run pip install sklearn-crfsuite in your environment"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Import the basic packages for data handling\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"\n",
"# tqdm wraps an iterable and displays a progress bar, to monitor long-running loops\n",
"from tqdm import tqdm\n",
"\n",
"# Import the train/test split utility\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"# Import the CRF model\n",
"from sklearn_crfsuite import CRF\n",
"\n",
"# Import the model-evaluation utilities\n",
"from sklearn_crfsuite import metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Loading the Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data lives in the data/ directory, which contains four folders, one per medical scenario: 出院情况 (discharge status), 病史特点 (history characteristics), 诊疗过程 (treatment course), and 一般项目 (general items). Each folder holds the electronic medical records for that scenario as two kinds of files: 'xxx-yyy.txtoriginal.txt' and 'xxx-yyy.txt'. 'xxx-yyy.txtoriginal.txt' contains the text of record number yyy of scenario xxx, stored on the first line of the file; 'xxx-yyy.txt' holds the corresponding label data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data contains 5 entity types: 治疗 (treatment), 身体部位 (body part), 疾病和诊断 (disease and diagnosis), 症状和体征 (symptoms and signs), and 检查和检验 (examination and test)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1、患者缘于1小时前被他人打伤头面部,鼻背部,左胸部,伤后头晕头痛,心慌气短,鼻背部疼痛,左胸部疼痛,双膝部右手右肘疼痛,自来我院。血压130/80mmHg。神清语利,扶入诊室,查体合作。头枕部约4.0厘米5.0厘米软组织红肿,触痛。鼻背部肿胀明显,触痛。左胸部可见约3.0厘米3.0厘米软组织红肿,触痛。双膝关节活动受限,双膝、右手背散在挫伤痕。右肘部可见皮肤挫伤痕。神经系统查体未见异常。辅助检查:头颅、鼻骨、全腹部CT:未见明显异常;肋骨三维重建:左侧第7肋骨骨折。右手正斜位X光:未见明显异常;右肘、双膝正侧位X光:未见明显异常;下颌骨X光:未见异常。\t主因头面部鼻背部,左胸部外伤1小时来院。入院查体:血压130/80mmHg。神清语利,扶入诊室,查体合作。头枕部约4.0厘米5.0厘米软组织红肿,触痛。鼻背部肿胀明显,触痛。左胸部可见约3.0厘米3.0厘米软组织红肿,触痛。双膝关节活动受限,双膝、右手背散在挫伤痕。右肘部可见皮肤挫伤痕。神经系统查体未见异常。辅助检查:头颅、鼻骨、全腹部CT:未见明显异常;肋骨三维重建:左侧第7肋骨骨折。右手正斜位X光:未见明显异常;右肘、双膝正侧位X光:未见明显异常;下颌骨X光:未见异常。\n"
]
}
],
"source": [
"# Read one medical-record text file and inspect its content\n",
"with open('data/病史特点/病史特点-39.txtoriginal.txt','r', encoding='utf-8') as f:\n",
" content = f.read().strip()\n",
"print(content)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"背部\t0\t1\t身体部位\n",
"左胸部\t3\t5\t身体部位\n"
]
}
],
"source": [
"# Read a label file to see the annotation format\n",
"with open('data/一般项目/一般项目-39.txt','r', encoding='utf-8') as f:\n",
" content_label = f.read().strip()\n",
"print(content_label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As can be seen, each line of a label file describes one entity: the entity text, its start index, its end index, and its entity class, separated by tabs. Both indices are inclusive character positions in the record text. For example, the first line above says that characters 0 through 1 form '背部', an entity of the body-part class."
]
},
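The tab-separated label format described above can be parsed with a plain `str.split`; a standalone sketch, using the first label line from the output above:

```python
# Parse one line of a label file: entity text, start index, end index, entity class,
# separated by tabs. Indices are inclusive character positions.
line = "背部\t0\t1\t身体部位"
entity, start, end, kind = line.split('\t')
start, end = int(start), int(end)
print(entity, start, end, kind)  # 背部 0 1 身体部位
```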
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Annotation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two main annotation schemes for entity recognition are BIOES and BIO; see the lab manual for details. To keep the number of label classes small, we use the BIO scheme: the first character of an entity is tagged B, the remaining characters of the entity are tagged I, and non-entity characters are tagged O."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The 5 entity classes 治疗, 身体部位, 疾病和诊断, 症状和体征, and 检查和检验 are marked TREATMENT, BODY, DISEASES, SIGNS, and EXAMINATIONS respectively."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, when tagging, the first character of a treatment entity is labeled B-TREATMENT, and the remaining characters of that entity are labeled I-TREATMENT."
]
},
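The B/I/O convention just described can be sketched in a few lines; this toy helper (not part of the notebook's code) tags a single entity span in a sentence of length n:

```python
# Toy sketch of BIO tagging: tag the inclusive span [start, end] with the given class,
# everything else with 'O'.
def bio_tags(n, start, end, label):
    tags = ['O'] * n
    tags[start] = 'B-' + label
    for i in range(start + 1, end + 1):
        tags[i] = 'I-' + label
    return tags

print(bio_tags(6, 2, 4, 'BODY'))  # ['O', 'O', 'B-BODY', 'I-BODY', 'I-BODY', 'O']
```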
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"label_dict = {'治疗':'TREATMENT',\n",
"              '身体部位':'BODY',\n",
"              '疾病和诊断':'DISEASES',\n",
"              '症状和体征':'SIGNS',\n",
"              '检查和检验':'EXAMINATIONS'}\n",
"\n",
"def sentence2BIOlabel(sentence,label_from_file):\n",
"    '''\n",
"    Return the BIO label list of the sentence\n",
"    Args:\n",
"        sentence: a sentence, as a string\n",
"        label_from_file: the labels of this sentence, in the raw format read from the txt file (like content_label above)\n",
"    Returns:\n",
"        sentence_label: the BIO labels of the sentence, a list whose i-th item is the label of the i-th character\n",
"    '''\n",
"    # Initialize every label to 'O'; the entity spans are overwritten below.\n",
"    sentence_label = ['O']*len(sentence)\n",
"    if label_from_file=='':\n",
"        return sentence_label\n",
"    # Each line of label_from_file describes one entity: entity text, start index, end index, entity class, separated by tabs\n",
"    for line in label_from_file.split('\\n'):\n",
"        # entity_info holds the fields of a single entity\n",
"        entity_info = line.strip().split('\\t')\n",
"        start_index = int(entity_info[1])  # start position of the entity in the text\n",
"        end_index = int(entity_info[2])  # end position of the entity in the text (inclusive)\n",
"        entity_label = label_dict[entity_info[3]]  # entity label class\n",
"        # Tag the first character of the entity as B-xx\n",
"        sentence_label[start_index] = 'B-'+entity_label\n",
"        # Tag the remaining characters of the entity as I-xx\n",
"        for i in range(start_index+1,end_index+1):\n",
"            sentence_label[i] = 'I-'+entity_label\n",
"    return sentence_label"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['B-BODY', 'I-BODY', 'O', 'B-BODY', 'I-BODY', 'I-BODY', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']\n"
]
}
],
"source": [
"# Demonstrate sentence2BIOlabel on the content and content_label read above\n",
"# Compute and print the BIO labels of content\n",
"sentence_label_tmp = sentence2BIOlabel(content,content_label)\n",
"print(sentence_label_tmp)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1 B-BODY\n",
"、 I-BODY\n",
"患 O\n",
"者 B-BODY\n",
"缘 I-BODY\n",
"于 I-BODY\n",
"1 O\n",
"小 O\n",
"时 O\n",
"前 O\n",
"被 O\n",
"他 O\n",
"人 O\n",
"打 O\n",
"伤 O\n",
"头 O\n",
"面 O\n",
"部 O\n",
", O\n",
"鼻 O\n",
"背 O\n",
"部 O\n",
", O\n",
"左 O\n",
"胸 O\n",
"部 O\n",
", O\n",
"伤 O\n",
"后 O\n",
"头 O\n",
"晕 O\n",
"头 O\n",
"痛 O\n",
", O\n",
"心 O\n",
"慌 O\n",
"气 O\n",
"短 O\n",
", O\n",
"鼻 O\n",
"背 O\n",
"部 O\n",
"疼 O\n",
"痛 O\n",
", O\n",
"左 O\n",
"胸 O\n",
"部 O\n",
"疼 O\n",
"痛 O\n",
", O\n",
"双 O\n",
"膝 O\n",
"部 O\n",
"右 O\n",
"手 O\n",
"右 O\n",
"肘 O\n",
"疼 O\n",
"痛 O\n",
", O\n",
"自 O\n",
"来 O\n",
"我 O\n",
"院 O\n",
"。 O\n",
"血 O\n",
"压 O\n",
"1 O\n",
"3 O\n",
"0 O\n",
"/ O\n",
"8 O\n",
"0 O\n",
"m O\n",
"m O\n",
"H O\n",
"g O\n",
"。 O\n",
"神 O\n",
"清 O\n",
"语 O\n",
"利 O\n",
", O\n",
"扶 O\n",
"入 O\n",
"诊 O\n",
"室 O\n",
", O\n",
"查 O\n",
"体 O\n",
"合 O\n",
"作 O\n",
"。 O\n",
"头 O\n",
"枕 O\n",
"部 O\n",
"约 O\n",
"4 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"5 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"软 O\n",
"组 O\n",
"织 O\n",
"红 O\n",
"肿 O\n",
", O\n",
"触 O\n",
"痛 O\n",
"。 O\n",
"鼻 O\n",
"背 O\n",
"部 O\n",
"肿 O\n",
"胀 O\n",
"明 O\n",
"显 O\n",
", O\n",
"触 O\n",
"痛 O\n",
"。 O\n",
"左 O\n",
"胸 O\n",
"部 O\n",
"可 O\n",
"见 O\n",
"约 O\n",
"3 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"3 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"软 O\n",
"组 O\n",
"织 O\n",
"红 O\n",
"肿 O\n",
", O\n",
"触 O\n",
"痛 O\n",
"。 O\n",
"双 O\n",
"膝 O\n",
"关 O\n",
"节 O\n",
"活 O\n",
"动 O\n",
"受 O\n",
"限 O\n",
", O\n",
"双 O\n",
"膝 O\n",
"、 O\n",
"右 O\n",
"手 O\n",
"背 O\n",
"散 O\n",
"在 O\n",
"挫 O\n",
"伤 O\n",
"痕 O\n",
"。 O\n",
"右 O\n",
"肘 O\n",
"部 O\n",
"可 O\n",
"见 O\n",
"皮 O\n",
"肤 O\n",
"挫 O\n",
"伤 O\n",
"痕 O\n",
"。 O\n",
"神 O\n",
"经 O\n",
"系 O\n",
"统 O\n",
"查 O\n",
"体 O\n",
"未 O\n",
"见 O\n",
"异 O\n",
"常 O\n",
"。 O\n",
"辅 O\n",
"助 O\n",
"检 O\n",
"查 O\n",
": O\n",
"头 O\n",
"颅 O\n",
"、 O\n",
"鼻 O\n",
"骨 O\n",
"、 O\n",
"全 O\n",
"腹 O\n",
"部 O\n",
"C O\n",
"T O\n",
": O\n",
"未 O\n",
"见 O\n",
"明 O\n",
"显 O\n",
"异 O\n",
"常 O\n",
"; O\n",
"肋 O\n",
"骨 O\n",
"三 O\n",
"维 O\n",
"重 O\n",
"建 O\n",
": O\n",
"左 O\n",
"侧 O\n",
"第 O\n",
"7 O\n",
"肋 O\n",
"骨 O\n",
"骨 O\n",
"折 O\n",
"。 O\n",
"右 O\n",
"手 O\n",
"正 O\n",
"斜 O\n",
"位 O\n",
"X O\n",
"光 O\n",
": O\n",
"未 O\n",
"见 O\n",
"明 O\n",
"显 O\n",
"异 O\n",
"常 O\n",
"; O\n",
"右 O\n",
"肘 O\n",
"、 O\n",
"双 O\n",
"膝 O\n",
"正 O\n",
"侧 O\n",
"位 O\n",
"X O\n",
"光 O\n",
": O\n",
"未 O\n",
"见 O\n",
"明 O\n",
"显 O\n",
"异 O\n",
"常 O\n",
"; O\n",
"下 O\n",
"颌 O\n",
"骨 O\n",
"X O\n",
"光 O\n",
": O\n",
"未 O\n",
"见 O\n",
"异 O\n",
"常 O\n",
"。 O\n",
"\t O\n",
"主 O\n",
"因 O\n",
"头 O\n",
"面 O\n",
"部 O\n",
"鼻 O\n",
"背 O\n",
"部 O\n",
", O\n",
"左 O\n",
"胸 O\n",
"部 O\n",
"外 O\n",
"伤 O\n",
"1 O\n",
"小 O\n",
"时 O\n",
"来 O\n",
"院 O\n",
"。 O\n",
"入 O\n",
"院 O\n",
"查 O\n",
"体 O\n",
": O\n",
"血 O\n",
"压 O\n",
"1 O\n",
"3 O\n",
"0 O\n",
"/ O\n",
"8 O\n",
"0 O\n",
"m O\n",
"m O\n",
"H O\n",
"g O\n",
"。 O\n",
"神 O\n",
"清 O\n",
"语 O\n",
"利 O\n",
", O\n",
"扶 O\n",
"入 O\n",
"诊 O\n",
"室 O\n",
", O\n",
"查 O\n",
"体 O\n",
"合 O\n",
"作 O\n",
"。 O\n",
"头 O\n",
"枕 O\n",
"部 O\n",
"约 O\n",
"4 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"5 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"软 O\n",
"组 O\n",
"织 O\n",
"红 O\n",
"肿 O\n",
", O\n",
"触 O\n",
"痛 O\n",
"。 O\n",
"鼻 O\n",
"背 O\n",
"部 O\n",
"肿 O\n",
"胀 O\n",
"明 O\n",
"显 O\n",
", O\n",
"触 O\n",
"痛 O\n",
"。 O\n",
"左 O\n",
"胸 O\n",
"部 O\n",
"可 O\n",
"见 O\n",
"约 O\n",
"3 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"3 O\n",
". O\n",
"0 O\n",
"厘 O\n",
"米 O\n",
"软 O\n",
"组 O\n",
"织 O\n",
"红 O\n",
"肿 O\n",
", O\n",
"触 O\n",
"痛 O\n",
"。 O\n",
"双 O\n",
"膝 O\n",
"关 O\n",
"节 O\n",
"活 O\n",
"动 O\n",
"受 O\n",
"限 O\n",
", O\n",
"双 O\n",
"膝 O\n",
"、 O\n",
"右 O\n",
"手 O\n",
"背 O\n",
"散 O\n",
"在 O\n",
"挫 O\n",
"伤 O\n",
"痕 O\n",
"。 O\n",
"右 O\n",
"肘 O\n",
"部 O\n",
"可 O\n",
"见 O\n",
"皮 O\n",
"肤 O\n",
"挫 O\n",
"伤 O\n",
"痕 O\n",
"。 O\n",
"神 O\n",
"经 O\n",
"系 O\n",
"统 O\n",
"查 O\n",
"体 O\n",
"未 O\n",
"见 O\n",
"异 O\n",
"常 O\n",
"。 O\n",
"辅 O\n",
"助 O\n",
"检 O\n",
"查 O\n",
": O\n",
"头 O\n",
"颅 O\n",
"、 O\n",
"鼻 O\n",
"骨 O\n",
"、 O\n",
"全 O\n",
"腹 O\n",
"部 O\n",
"C O\n",
"T O\n",
": O\n",
"未 O\n",
"见 O\n",
"明 O\n",
"显 O\n",
"异 O\n",
"常 O\n",
"; O\n",
"肋 O\n",
"骨 O\n",
"三 O\n",
"维 O\n",
"重 O\n",
"建 O\n",
": O\n",
"左 O\n",
"侧 O\n",
"第 O\n",
"7 O\n",
"肋 O\n",
"骨 O\n",
"骨 O\n",
"折 O\n",
"。 O\n",
"右 O\n",
"手 O\n",
"正 O\n",
"斜 O\n",
"位 O\n",
"X O\n",
"光 O\n",
": O\n",
"未 O\n",
"见 O\n",
"明 O\n",
"显 O\n",
"异 O\n",
"常 O\n",
"; O\n",
"右 O\n",
"肘 O\n",
"、 O\n",
"双 O\n",
"膝 O\n",
"正 O\n",
"侧 O\n",
"位 O\n",
"X O\n",
"光 O\n",
": O\n",
"未 O\n",
"见 O\n",
"明 O\n",
"显 O\n",
"异 O\n",
"常 O\n",
"; O\n",
"下 O\n",
"颌 O\n",
"骨 O\n",
"X O\n",
"光 O\n",
": O\n",
"未 O\n",
"见 O\n",
"异 O\n",
"常 O\n",
"。 O\n"
]
}
],
"source": [
"# Print each character of content alongside its BIO label\n",
"for i in range(len(content)):\n",
"    print(content[i],sentence_label_tmp[i])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"# Read every sample in the dataset: collect the sentences in sentence_list and their BIO labels in label_list\n",
"# sentence_list format: [sentence 1, sentence 2, sentence 3, ..., sentence n]\n",
"# label_list format: [BIO labels of sentence 1, BIO labels of sentence 2, ..., BIO labels of sentence n]\n",
"\n",
"## Enter your code here\n",
"sentence_list = []\n",
"label_list = []\n",
"for folder in [\"病史特点\", \"出院情况\", \"一般项目\", \"诊疗过程\"]:\n",
"    for i in range(1, 1000):\n",
"        try:\n",
"            with open(f\"./data/{folder}/{folder}-{i}.txt\", 'r', encoding='utf-8') as f:\n",
"                label = f.read().strip()\n",
"            with open(f\"./data/{folder}/{folder}-{i}.txtoriginal.txt\", 'r', encoding='utf-8') as f:\n",
"                text = f.read().strip()\n",
"            sentence_label_tmp = sentence2BIOlabel(text, label)\n",
"            assert len(sentence_label_tmp) == len(text)\n",
"            sentence_list.append(text)\n",
"            label_list.append(sentence_label_tmp)\n",
"        # Note: 'except IndexError and FileNotFoundError' would only catch FileNotFoundError;\n",
"        # a tuple is needed to catch both exception types\n",
"        except (IndexError, FileNotFoundError):\n",
"            pass\n",
"\n",
"## End your code"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Text Feature Engineering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To tag each character with the CRF algorithm, we need features for each character, which requires feature engineering on the text. This section builds the features of every character in a sentence."
]
},
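Before building the features in code, here is a minimal standalone sketch of the kind of per-character feature dict a sklearn-crfsuite CRF consumes (the feature names follow the conventions used below; the example values are illustrative):

```python
# One feature dict per character; the CRF receives, per sentence, a list of such dicts.
sentence = '头痛。'
i = 1  # features of the middle character '痛'
features = {
    'word': sentence[i],         # current character
    'pre_word': sentence[i-1],   # previous character
    'after_word': sentence[i+1], # next character
    'bias': 1,                   # constant bias feature
}
print(features)
```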
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# jieba is the most widely used Chinese word-segmentation package, but our dataset is medical text,\n",
"# so we use pkuseg, a library that performs better on domain-specific text\n",
"import pkuseg\n",
"# Set model_name='medicine' to load the medical-domain model. The first run downloads the model automatically, which may take a while.\n",
"# Setting postag=True adds part-of-speech tagging to the segmentation\n",
"seg = pkuseg.pkuseg(model_name='medicine',postag=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[('发病', 'vn'),\n",
" ('原因', 'n'),\n",
" ('为', 'v'),\n",
" ('右髋部', 'n'),\n",
" ('摔伤', 'v'),\n",
" ('后', 'f'),\n",
" ('疼痛', 'a'),\n",
" ('肿胀', 'v')]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Example use of the pkuseg package\n",
"seg.cut('发病原因为右髋部摔伤后疼痛肿胀')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As shown, pkuseg segments medical text well. The output format of seg.cut(text) is [(word 1, POS of word 1), (word 2, POS of word 2), ..., (word n, POS of word n)]. We will use this segmentation shortly when building per-character features."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"# Load the medical vocabulary THUOCL_medical.txt, downloaded from https://github.com/thunlp/THUOCL.\n",
"# Each line of the file has the format: medical term, frequency, separated by a tab\n",
"\n",
"# Read the file\n",
"with open('THUOCL_medical.txt', 'r', encoding='utf-8') as f:\n",
"    medical_words = f.read().strip()\n",
"# Build the medical vocabulary list\n",
"medical_words_list = [words.strip().split('\\t')[0] for words in medical_words.split('\\n')]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['精神', '医院', '检查', '死亡', '恢复', '意识', '医疗', '治疗', '卫生', '患者']"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# A sample of the medical vocabulary; it is used later when building features.\n",
"medical_words_list[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this preparation done, we can now construct the features."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"def word2feature(sentence,i):\n",
"    '''\n",
"    Return some simple features of the i-th character of sentence\n",
"    Args:\n",
"        sentence: the sentence to process\n",
"        i: index of the character whose features are returned\n",
"    Returns:\n",
"        simple_feature: a dict of simple features, mapping feature name to feature value\n",
"    '''\n",
"    simple_feature = {}\n",
"    simple_feature['word'] = sentence[i]  # current character\n",
"    simple_feature['pre_word'] = sentence[i-1] if i>0 else 'start'  # previous character\n",
"    simple_feature['after_word'] = sentence[i+1] if i<len(sentence)-1 else 'end'  # next character\n",
"    \n",
"    # Add the character bigram features: previous+current and current+next,\n",
"    # named 'pre_word_word' and 'word_after_word' respectively\n",
"    ## Enter your code here\n",
"    simple_feature['pre_word_word'] = sentence[i-1: i+1] if i > 0 else sentence[i]\n",
"    simple_feature['word_after_word'] = sentence[i: i+2] if i < len(sentence)-1 else sentence[i]\n",
"    ## End your code\n",
"    \n",
"    # Add a bias term\n",
"    simple_feature['bias'] = 1\n",
"    return simple_feature"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def sentence2feature(sentence):\n",
"    '''\n",
"    On top of the simple features from word2feature, add some more complex features and\n",
"    return the list of feature dicts for every character in the sentence\n",
"    Args:\n",
"        sentence: the sentence to process\n",
"    Returns:\n",
"        sentence_feature_list: the per-character feature dicts, as the list [features of char 1, features of char 2, ..., features of char n]\n",
"    '''\n",
"    sentence_feature_list = [word2feature(sentence,i) for i in range(len(sentence))]\n",
"    # Add more complex features for each character: the segmented word containing the character,\n",
"    # its POS, the previous word, the next word, whether the word is a medical term,\n",
"    # and whether the character is the first character of its word\n",
"    word_index = 0  # pointer to the first character of the current word; advanced after each word\n",
"    # Segment the sentence with pkuseg\n",
"    sentence_cut = seg.cut(sentence)\n",
"    # To distinguish words from characters, WORD (uppercase) denotes a word, word (lowercase) a character\n",
"    for i,(WORD,nominal) in enumerate(sentence_cut):\n",
"        for j in range(word_index,word_index+len(WORD)):\n",
"            sentence_feature_list[j]['WORD'] = WORD  # the word containing the current character\n",
"            sentence_feature_list[j]['nominal'] = nominal  # POS of that word\n",
"            sentence_feature_list[j]['pre_WORD'] = sentence_cut[i-1][0] if i>0 else 'START'  # previous word\n",
"            sentence_feature_list[j]['after_WORD'] = sentence_cut[i+1][0] if i<len(sentence_cut)-1 else 'END'  # next word\n",
"            sentence_feature_list[j]['is_medicalwords'] = 1 if WORD in medical_words_list else 0  # whether the word is a medical term\n",
"\n",
"            # Add a feature 'is_first': 1 if the current character is the first character of its word, else 0\n",
"            ## Enter your code here\n",
"            # Compare positions rather than characters: matching on WORD[0] would\n",
"            # mislabel later characters that happen to equal the first one\n",
"            sentence_feature_list[j]['is_first'] = 1 if j == word_index else 0\n",
"            ## End your code\n",
"\n",
"        word_index = word_index+len(WORD)  # advance word_index to the next word\n",
"    return sentence_feature_list"
]
},
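The `word_index` pointer logic above, walking word-by-word through the segmentation while tracking the character offset, can be isolated in a small standalone sketch (with a hard-coded stand-in for `seg.cut`):

```python
# Align a word segmentation back to character positions and mark word-initial characters.
sentence = "发病原因"
fake_cut = [("发病", "vn"), ("原因", "n")]  # stand-in for seg.cut(sentence)

word_index = 0  # character offset of the current word's first character
is_first = [0] * len(sentence)
for WORD, pos in fake_cut:
    for j in range(word_index, word_index + len(WORD)):
        is_first[j] = 1 if j == word_index else 0
    word_index += len(WORD)  # advance to the next word's first character
print(is_first)  # [1, 0, 1, 0]
```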
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1198/1198 [00:58<00:00, 20.50it/s]\n"
]
}
],
"source": [
"# Build the features of every character of every sentence in sentence_list, and store them in feature_list\n",
"# tqdm displays a progress bar to monitor progress\n",
"feature_list = [sentence2feature(sentence) for sentence in tqdm(sentence_list)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building the CRF Model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"# First split the data into training and test sets\n",
"x_train,x_test,y_train,y_test = train_test_split(feature_list, label_list, test_size=0.3, random_state=2020)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\avaws\\anaconda3\\envs\\pkuseg\\lib\\site-packages\\sklearn\\base.py:213: FutureWarning: From version 0.24, get_params will raise an AttributeError if a parameter cannot be retrieved as an instance attribute. Previously it would return None.\n",
" FutureWarning)\n"
]
},
{
"data": {
"text/plain": [
"CRF(algorithm='lbfgs', all_possible_transitions=False, c1=0.1, c2=0.1,\n",
" keep_tempfiles=None, max_iterations=100)"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Build a CRF model\n",
"crf = CRF(\n",
"    algorithm='lbfgs',  # training algorithm\n",
"    c1=0.1,  # L1 regularization coefficient\n",
"    c2=0.1,  # L2 regularization coefficient\n",
"    max_iterations=100,  # maximum number of optimizer iterations\n",
"    all_possible_transitions=False\n",
")\n",
"# Fit the CRF model on the training set\n",
"crf.fit(x_train,y_train)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"def predict(sentence):\n",
"    '''\n",
"    Return the BIO tags predicted by the CRF for a sentence\n",
"    Args:\n",
"        sentence: the sentence to process\n",
"    Returns:\n",
"        sent_bio: a list with the BIO tag of each character in the sentence\n",
"    '''\n",
"    # Hint: build the features of the input sentence, then obtain the CRF prediction\n",
"    # Hint: crf.predict_single(features of a sentence) predicts a single sentence and returns the list of per-character BIO tags\n",
"    ## Enter your code here\n",
"    sent_bio = crf.predict_single(sentence2feature(sentence))\n",
"    ## End your code\n",
"    \n",
"    return sent_bio"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['O', 'O', 'O', 'O', 'B-BODY', 'I-BODY', 'O', 'O', 'O', 'O', 'O']"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Predict one sentence with the predict function\n",
"predict('这是由于耳膜损伤导致的')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The CRF model correctly identifies the entity in this sentence ('耳膜', tagged as a body part). Next we use the model to predict our test set."
]
},
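To turn a predicted tag list back into entity strings, one can scan for B- tags and extend over the following I- tags. A standalone sketch (not part of the notebook's code), applied to the prediction shown above:

```python
# Recover (entity text, class) pairs from a sentence and its BIO tag list.
def bio_to_entities(sentence, tags):
    entities = []
    i = 0
    while i < len(tags):
        if tags[i].startswith('B-'):
            label = tags[i][2:]
            j = i + 1
            while j < len(tags) and tags[j] == 'I-' + label:
                j += 1
            entities.append((sentence[i:j], label))
            i = j
        else:
            i += 1
    return entities

print(bio_to_entities('这是由于耳膜损伤导致的',
                      ['O', 'O', 'O', 'O', 'B-BODY', 'I-BODY', 'O', 'O', 'O', 'O', 'O']))
# [('耳膜', 'BODY')]
```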
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"# Get the predictions on the test set\n",
"y_pred = crf.predict(x_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model Evaluation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The metrics module bundled with sklearn_crfsuite provides an effective way to evaluate the model."
]
},
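The "flat" in these metrics means the per-sentence tag lists are flattened into one long sequence before scoring. A simplified hand computation of micro-averaged F1 over a label subset, on toy data, shows the idea:

```python
# "Flat" scoring: flatten per-sentence tag lists, then score tag-by-tag,
# counting only the tags in the chosen label subset (here, everything but 'O').
y_true = [['B-BODY', 'I-BODY', 'O'], ['O', 'B-SIGNS', 'O']]
y_pred = [['B-BODY', 'O',      'O'], ['O', 'B-SIGNS', 'O']]
labels = ['B-BODY', 'I-BODY', 'B-SIGNS', 'I-SIGNS']

flat_true = [t for seq in y_true for t in seq]
flat_pred = [p for seq in y_pred for p in seq]

tp = sum(1 for t, p in zip(flat_true, flat_pred) if t == p and t in labels)
precision = tp / sum(1 for p in flat_pred if p in labels)
recall = tp / sum(1 for t in flat_true if t in labels)
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.8
```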
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['B-TREATMENT',\n",
" 'I-TREATMENT',\n",
" 'B-BODY',\n",
" 'I-BODY',\n",
" 'B-SIGNS',\n",
" 'I-SIGNS',\n",
" 'B-EXAMINATIONS',\n",
" 'I-EXAMINATIONS',\n",
" 'B-DISEASES',\n",
" 'I-DISEASES']"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Get all labels of the CRF model\n",
"labels = list(crf.classes_)\n",
"# The label O dominates the data, and we care more about the other labels, so we remove O\n",
"labels.remove('O')\n",
"# Show all labels except 'O'\n",
"labels"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.937281513469052"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Compute the average F1 score over all labels except O\n",
"metrics.flat_f1_score(y_test, y_pred,\n",
" average='micro', labels=labels)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\avaws\\anaconda3\\envs\\pkuseg\\lib\\site-packages\\sklearn\\utils\\validation.py:71: FutureWarning: Pass labels=['B-TREATMENT', 'I-TREATMENT', 'B-BODY', 'I-BODY', 'B-SIGNS', 'I-SIGNS', 'B-EXAMINATIONS', 'I-EXAMINATIONS', 'B-DISEASES', 'I-DISEASES'] as keyword args. From version 0.25 passing these as positional arguments will result in an error\n",
" FutureWarning)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" precision recall f1-score support\n",
"\n",
" B-TREATMENT 0.881 0.804 0.840 285\n",
" I-TREATMENT 0.903 0.875 0.889 1346\n",
" B-BODY 0.917 0.924 0.920 3343\n",
" I-BODY 0.904 0.938 0.921 6004\n",
" B-SIGNS 0.974 0.986 0.980 2412\n",
" I-SIGNS 0.976 0.984 0.980 2673\n",
"B-EXAMINATIONS 0.968 0.964 0.966 2952\n",
"I-EXAMINATIONS 0.961 0.945 0.953 6655\n",
" B-DISEASES 0.858 0.744 0.797 227\n",
" I-DISEASES 0.850 0.717 0.777 875\n",
"\n",
" micro avg 0.938 0.936 0.937 26772\n",
" macro avg 0.919 0.888 0.902 26772\n",
" weighted avg 0.938 0.936 0.937 26772\n",
"\n"
]
}
],
"source": [
"# Inspect the per-class prediction report\n",
"print(metrics.flat_classification_report(\n",
" y_test, y_pred, labels=labels, digits=3\n",
"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BiLSTM-CRF"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The CRF model can also be combined with a BiLSTM for entity recognition. The benefit is that the BiLSTM learns text features automatically, so we no longer need to define features ourselves and can skip the feature-engineering step."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the BiLSTM-CRF code is lengthy and implementing it is not our focus here (it is included for demonstration only), the implementation details live in BiLSTM_CRF.py. Only the key parts are shown."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting package metadata (current_repodata.json): ...working... done\n",
"Solving environment: ...working... done\n",
"\n",
"# All requested packages already installed.\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
"==> WARNING: A newer version of conda exists. <==\n",
" current version: 4.11.0\n",
" latest version: 4.12.0\n",
"\n",
"Please update conda by running\n",
"\n",
" $ conda update -n base -c defaults conda\n",
"\n",
"\n"
]
}
],
"source": [
"! conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\avaws\\anaconda3\\envs\\pkuseg\\lib\\site-packages\\tqdm\\auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from BiLSTM_CRF import *"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"# Map each word (character) to an id\n",
"word2id = word_to_id(sentence_list)\n",
"# Map each label to an id\n",
"tag2id = tag_to_id(label_list)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'O': 0,\n",
" 'B-BODY': 1,\n",
" 'I-BODY': 2,\n",
" 'B-SIGNS': 3,\n",
" 'I-SIGNS': 4,\n",
" 'B-EXAMINATIONS': 5,\n",
" 'I-EXAMINATIONS': 6,\n",
" 'B-DISEASES': 7,\n",
" 'I-DISEASES': 8,\n",
" 'B-TREATMENT': 9,\n",
" 'I-TREATMENT': 10,\n",
" '<unk>': 11,\n",
" '<pad>': 12,\n",
" '<start>': 13,\n",
" '<end>': 14}"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Inspect the label-to-id mapping\n",
"tag2id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When training the LSTM model we must add PAD and UNK tokens to word2id and tag2id; an LSTM with a CRF layer additionally needs <start> and <end> (used during decoding). The format of word2id is similar to that of tag2id."
]
},
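A sketch of what a `tag_to_id`-style helper plausibly does (the real helper lives in BiLSTM_CRF.py, which is not shown, so this is an illustrative assumption): assign ids in order of first appearance, then append the special tokens, consistent with the tag2id mapping printed above:

```python
# Build a tag->id map from label sequences, then append the special tokens
# an LSTM-CRF needs (<start>/<end> are used during decoding).
def build_tag2id(label_list):
    tag2id = {}
    for labels in label_list:
        for tag in labels:
            if tag not in tag2id:
                tag2id[tag] = len(tag2id)
    for special in ['<unk>', '<pad>', '<start>', '<end>']:
        tag2id[special] = len(tag2id)
    return tag2id

print(build_tag2id([['O', 'B-BODY', 'I-BODY']]))
```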
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"# Split sentence_list and label_list into training and test sets with the same ratio and random seed as for the CRF model\n",
"x_train_lstmcrf,x_test_lstmcrf,y_train_lstmcrf,y_test_lstmcrf = train_test_split(sentence_list, label_list, test_size=0.3, random_state=2020)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"# Append an \"<end>\" token to each sentence\n",
"x_train_lstmcrf,y_train_lstmcrf = prepocess_data_for_lstmcrf(x_train_lstmcrf,y_train_lstmcrf)\n",
"x_test_lstmcrf,y_test_lstmcrf = prepocess_data_for_lstmcrf(x_test_lstmcrf,y_test_lstmcrf,test=True)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1, step/total_step: 10/14 71.43% Loss:679.0421\n",
"Epoch 1, Val Loss:301.0062\n",
"Epoch 2, step/total_step: 10/14 71.43% Loss:374.4472\n",
"Epoch 2, Val Loss:230.5597\n",
"Epoch 3, step/total_step: 10/14 71.43% Loss:303.9551\n",
"Epoch 3, Val Loss:205.2301\n",
"Epoch 4, step/total_step: 10/14 71.43% Loss:269.2586\n",
"Epoch 4, Val Loss:172.2326\n",
"Epoch 5, step/total_step: 10/14 71.43% Loss:224.5888\n",
"Epoch 5, Val Loss:140.8520\n",
"Epoch 6, step/total_step: 10/14 71.43% Loss:183.4506\n",
"Epoch 6, Val Loss:116.7093\n",
"Epoch 7, step/total_step: 10/14 71.43% Loss:152.1754\n",
"Epoch 7, Val Loss:105.2365\n",
"Epoch 8, step/total_step: 10/14 71.43% Loss:133.0295\n",
"Epoch 8, Val Loss:85.2563\n",
"Epoch 9, step/total_step: 10/14 71.43% Loss:109.7353\n",
"Epoch 9, Val Loss:72.5094\n",
"Epoch 10, step/total_step: 10/14 71.43% Loss:93.3250\n",
"Epoch 10, Val Loss:63.0916\n",
"Epoch 11, step/total_step: 10/14 71.43% Loss:81.4246\n",
"Epoch 11, Val Loss:56.4978\n",
"Epoch 12, step/total_step: 10/14 71.43% Loss:72.7384\n",
"Epoch 12, Val Loss:50.4667\n",
"Epoch 13, step/total_step: 10/14 71.43% Loss:65.0488\n",
"Epoch 13, Val Loss:45.2442\n",
"Epoch 14, step/total_step: 10/14 71.43% Loss:58.7878\n",
"Epoch 14, Val Loss:41.5584\n",
"Epoch 15, step/total_step: 10/14 71.43% Loss:53.7732\n",
"Epoch 15, Val Loss:37.7015\n",
"Epoch 16, step/total_step: 10/14 71.43% Loss:49.1189\n",
"Epoch 16, Val Loss:34.6484\n",
"Epoch 17, step/total_step: 10/14 71.43% Loss:45.5345\n",
"Epoch 17, Val Loss:32.9121\n",
"Epoch 18, step/total_step: 10/14 71.43% Loss:42.7169\n",
"Epoch 18, Val Loss:30.8214\n",
"Epoch 19, step/total_step: 10/14 71.43% Loss:40.0796\n",
"Epoch 19, Val Loss:28.5754\n",
"Epoch 20, step/total_step: 10/14 71.43% Loss:37.5324\n",
"Epoch 20, Val Loss:27.1824\n",
"Epoch 21, step/total_step: 10/14 71.43% Loss:35.6576\n",
"Epoch 21, Val Loss:25.6989\n",
"Epoch 22, step/total_step: 10/14 71.43% Loss:33.6447\n",
"Epoch 22, Val Loss:24.4878\n",
"Epoch 23, step/total_step: 10/14 71.43% Loss:31.8999\n",
"Epoch 23, Val Loss:22.9804\n",
"Epoch 24, step/total_step: 10/14 71.43% Loss:30.4300\n",
"Epoch 24, Val Loss:22.2672\n",
"Epoch 25, step/total_step: 10/14 71.43% Loss:29.0533\n",
"Epoch 25, Val Loss:20.9311\n",
"Epoch 26, step/total_step: 10/14 71.43% Loss:27.6675\n",
"Epoch 26, Val Loss:20.0889\n",
"Epoch 27, step/total_step: 10/14 71.43% Loss:26.3884\n",
"Epoch 27, Val Loss:19.4638\n",
"Epoch 28, step/total_step: 10/14 71.43% Loss:25.4839\n",
"Epoch 28, Val Loss:18.4383\n",
"Epoch 29, step/total_step: 10/14 71.43% Loss:24.1868\n",
"Epoch 29, Val Loss:18.2865\n",
"Epoch 30, step/total_step: 10/14 71.43% Loss:23.7794\n",
"Epoch 30, Val Loss:17.5620\n"
]
}
],
"source": [
"# Build a BiLSTM_CRF model\n",
"model = BiLSTM_CRF_Model(vocab_size=len(word2id),out_size=len(tag2id),batch_size=64, epochs=30)\n",
"# Train on the training set\n",
"model.train(x_train_lstmcrf,y_train_lstmcrf,word2id,tag2id)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\avaws\\anaconda3\\envs\\pkuseg\\lib\\site-packages\\torch\\nn\\modules\\rnn.py:695: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). (Triggered internally at ..\\aten\\src\\ATen\\native\\cudnn\\RNN.cpp:925.)\n",
" self.num_layers, self.dropout, self.training, self.bidirectional)\n"
]
}
],
"source": [
"# Get the predictions on the test set\n",
"y_pred_lstmcrf, _ = model.test(x_test_lstmcrf,y_test_lstmcrf,word2id,tag2id)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.9282667117701228"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Compute the average F1 score of the BiLSTM-CRF model over all labels except O\n",
"metrics.flat_f1_score(y_test_lstmcrf, y_pred_lstmcrf,\n",
" average='micro', labels=labels)"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\avaws\\anaconda3\\envs\\pkuseg\\lib\\site-packages\\sklearn\\utils\\validation.py:71: FutureWarning: Pass labels=['B-TREATMENT', 'I-TREATMENT', 'B-BODY', 'I-BODY', 'B-SIGNS', 'I-SIGNS', 'B-EXAMINATIONS', 'I-EXAMINATIONS', 'B-DISEASES', 'I-DISEASES'] as keyword args. From version 0.25 passing these as positional arguments will result in an error\n",
" FutureWarning)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" precision recall f1-score support\n",
"\n",
" B-TREATMENT 0.901 0.572 0.700 285\n",
" I-TREATMENT 0.951 0.772 0.852 1346\n",
" B-BODY 0.931 0.908 0.919 3343\n",
" I-BODY 0.888 0.953 0.919 6004\n",
" B-SIGNS 0.971 0.959 0.965 2412\n",
" I-SIGNS 0.968 0.964 0.966 2673\n",
"B-EXAMINATIONS 0.952 0.960 0.956 2952\n",
"I-EXAMINATIONS 0.959 0.938 0.948 6655\n",
" B-DISEASES 0.904 0.661 0.763 227\n",
" I-DISEASES 0.815 0.709 0.758 875\n",
"\n",
" micro avg 0.934 0.922 0.928 26772\n",
" macro avg 0.924 0.840 0.875 26772\n",
" weighted avg 0.935 0.922 0.927 26772\n",
"\n"
]
}
],
"source": [
"# Inspect the per-class prediction report of the BiLSTM-CRF model\n",
"print(metrics.flat_classification_report(\n",
" y_test_lstmcrf, y_pred_lstmcrf, labels=labels, digits=3\n",
"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}