Commit ae4b2942 by 20200913012

Project 5

parent f9198211
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 利用信息抽取技术搭建知识库\n",
"\n",
"在这个notebook文件中,有些模板代码已经提供给你,但你还需要实现更多的功能来完成这个项目。除非有明确要求,你无须修改任何已给出的代码。以**'【练习】'**开始的标题表示接下来的代码部分中有你需要实现的功能。这些部分都配有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示。\n",
"\n",
">**提示:**Code 和 Markdown 区域可通过 **Shift + Enter** 快捷键运行。此外,Markdown可以通过双击进入编辑模式。\n",
"\n",
"---\n",
"\n",
"### 让我们开始吧\n",
"\n",
"本项目的目的是结合命名实体识别、依存语法分析、实体消歧、实体统一对网站开放语料抓取的数据建立小型知识图谱。\n",
"\n",
"在现实世界中,你需要拼凑一系列的模型来完成不同的任务;举个例子,用来预测狗种类的算法会与预测人类的算法不同。在做项目的过程中,你可能会遇到不少失败的预测,因为并不存在完美的算法和模型。你最终提交的不完美的解决方案也一定会给你带来一个有趣的学习经验!\n",
"\n",
"\n",
"---\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 步骤 1:实体统一"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"实体统一做的是对同一实体具有多个名称的情况进行统一,将多种称谓统一到一个实体上,并体现在实体的属性中(可以给实体建立“别称”属性)\n",
"\n",
"例如:对“河北银行股份有限公司”、“河北银行公司”和“河北银行”我们都可以认为是一个实体,我们就可以将通过提取前两个称谓的主要内容,得到“河北银行”这个实体关键信息。\n",
"\n",
"公司名称有其特点,例如后缀可以省略、上市公司的地名可以省略等等。在data/dict目录中提供了几个词典,可供实体统一使用。\n",
"- company_suffix.txt是公司的通用后缀词典\n",
"- company_business_scope.txt是公司经营范围常用词典\n",
"- co_Province_Dim.txt是省份词典\n",
"- co_City_Dim.txt是城市词典\n",
"- stopwords.txt是可供参考的停用词\n",
"\n",
"### 练习1:\n",
"编写main_extract函数,实现对实体的名称提取“主体名称”的功能。"
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"import jieba\n",
"import jieba.posseg as pseg\n",
"import re\n",
"import datetime\n",
"\n",
"\n",
"# 从输入的“公司名”中提取主体\n",
"def main_extract(input_str,stop_word,d_4_delete,d_city_province):\n",
" # 开始分词并处理\n",
" seg = pseg.cut(input_str)\n",
" seg = remove_word(seg,stop_word,d_4_delete)\n",
" seg_lst = city_prov_ahead(seg,d_city_province)\n",
" return seg_lst\n",
"\n",
" \n",
"#TODO:实现公司名称中地名提前\n",
"def city_prov_ahead(seg,d_city_province):\n",
" # print(seg)\n",
" city_prov_lst = []\n",
" seg_lst = []\n",
" # TODO ...\n",
" for s in seg:\n",
" # print(s)\n",
" if s.flag == 'ns' and s.word in d_city_province:\n",
" city_prov_lst.append(s.word)\n",
" else:\n",
" seg_lst.append(s.word)\n",
" return city_prov_lst+seg_lst\n",
"\n",
"\n",
"\n",
"\n",
"#TODO:替换特殊符号\n",
"def remove_word(seg,stop_word,d_4_delete):\n",
" # TODO ...\n",
" seg_lst = [j for j in [i for i in seg if i.word not in stop_word] if j.word not in d_4_delete]\n",
" return seg_lst\n",
"\n",
"\n",
"# 初始化,加载词典\n",
"def my_initial():\n",
" fr1 = open(r\"../data/dict/co_City_Dim.txt\", encoding='utf-8')\n",
" fr2 = open(r\"../data/dict/co_Province_Dim.txt\", encoding='utf-8')\n",
" fr3 = open(r\"../data/dict/company_business_scope.txt\", encoding='utf-8')\n",
" fr4 = open(r\"../data/dict/company_suffix.txt\", encoding='utf-8')\n",
" #城市名\n",
" lines1 = fr1.readlines()\n",
" d_4_delete = []\n",
" d_city_province = [re.sub(r'(\\r|\\n)*','',line) for line in lines1]\n",
" #省份名\n",
" lines2 = fr2.readlines()\n",
" l2_tmp = [re.sub(r'(\\r|\\n)*','',line) for line in lines2]\n",
" d_city_province.extend(l2_tmp)\n",
" #公司后缀\n",
" lines3 = fr3.readlines()\n",
" l3_tmp = [re.sub(r'(\\r|\\n)*','',line) for line in lines3]\n",
" lines4 = fr4.readlines()\n",
" l4_tmp = [re.sub(r'(\\r|\\n)*','',line) for line in lines4]\n",
" d_4_delete.extend(l4_tmp)\n",
" #get stop_word\n",
" fr = open(r'../data/dict/stopwords.txt', encoding='utf-8') \n",
" stop_word = fr.readlines()\n",
" stop_word_after = [re.sub(r'(\\r|\\n)*','',stop_word[i]) for i in range(len(stop_word))]\n",
" stop_word_after[-1] = stop_word[-1]\n",
" stop_word = stop_word_after\n",
" return d_4_delete,stop_word,d_city_province\n"
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"河北银行\n"
]
}
],
"source": [
"# TODO:测试实体统一用例\n",
"d_4_delete,stop_word,d_city_province = my_initial()\n",
"company_name = \"河北银行股份有限公司\"\n",
"lst = main_extract(company_name,stop_word,d_4_delete,d_city_province)\n",
"company_name = ''.join(lst) # 对公司名提取主体部分,将包含相同主体部分的公司统一为一个实体\n",
"print(company_name)"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"金固\n"
]
}
],
"source": [
"input_str = '金固股份'\n",
"lst = main_extract(input_str,stop_word,d_4_delete,d_city_province)\n",
"company_name = ''.join(lst) # 对公司名提取主体部分,将包含相同主体部分的公司统一为一个实体\n",
"print(company_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 步骤 2:实体识别\n",
"有很多开源工具可以帮助我们对实体进行识别。常见的有LTP、StanfordNLP、FoolNLTK等等。\n",
"\n",
"本次采用FoolNLTK实现实体识别,fool是一个基于bi-lstm+CRF算法开发的深度学习开源NLP工具,包括了分词、实体识别等功能,大家可以通过fool很好地体会深度学习在该任务上的优缺点。\n",
"\n",
"在‘data/train_data.csv’和‘data/test_data.csv’中是从网络上爬虫得到的上市公司公告,数据样例如下:"
]
},
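{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell is a minimal sketch (not part of the required solution) of how fool.analysis is used later in this notebook: it returns the segmented words and a list of NER results, one list per input sentence, where each result is a (start, end, type, name) tuple. The example sentence is taken from the train_data sample above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch of the FoolNLTK API as used below (assumes foolnltk is installed)\n",
"import fool\n",
"\n",
"demo_sentence = '2016年协鑫集成科技股份有限公司向瑞峰(张家港)光伏科技有限公司支付设备款'\n",
"words, ners = fool.analysis(demo_sentence)\n",
"# ners holds one list per input sentence; each element is a\n",
"# (start, end, ner_type, ner_name) tuple, e.g. ner_type 'company' or 'person'\n",
"for start, end, ner_type, ner_name in ners[0]:\n",
"    print(start, end, ner_type, ner_name)"
]
},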
{
"cell_type": "code",
"execution_count": 112,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>id</th>\n",
" <th>sentence</th>\n",
" <th>tag</th>\n",
" <th>member1</th>\n",
" <th>member2</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>6461</td>\n",
" <td>与本公司关系:受同一公司控制 2,杭州富生电器有限公司企业类型: 有限公司注册地址: 富阳市...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2111</td>\n",
" <td>三、关联交易标的基本情况 1、交易标的基本情况 公司名称:红豆集团财务有限公司 公司地址:无...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>9603</td>\n",
" <td>2016年协鑫集成科技股份有限公司向瑞峰(张家港)光伏科技有限公司支付设备款人民币4,515...</td>\n",
" <td>1</td>\n",
" <td>协鑫集成科技股份有限公司</td>\n",
" <td>瑞峰(张家港)光伏科技有限公司</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>3456</td>\n",
" <td>证券代码:600777 证券简称:新潮实业 公告编号:2015-091 烟台新潮实业股份有限...</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>8844</td>\n",
" <td>本集团及广发证券股份有限公司持有辽宁成大股份有限公司股票的本期变动系买卖一揽子沪深300指数...</td>\n",
" <td>1</td>\n",
" <td>广发证券股份有限公司</td>\n",
" <td>辽宁成大股份有限公司</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" id sentence tag member1 \\\n",
"0 6461 与本公司关系:受同一公司控制 2,杭州富生电器有限公司企业类型: 有限公司注册地址: 富阳市... 0 0 \n",
"1 2111 三、关联交易标的基本情况 1、交易标的基本情况 公司名称:红豆集团财务有限公司 公司地址:无... 0 0 \n",
"2 9603 2016年协鑫集成科技股份有限公司向瑞峰(张家港)光伏科技有限公司支付设备款人民币4,515... 1 协鑫集成科技股份有限公司 \n",
"3 3456 证券代码:600777 证券简称:新潮实业 公告编号:2015-091 烟台新潮实业股份有限... 0 0 \n",
"4 8844 本集团及广发证券股份有限公司持有辽宁成大股份有限公司股票的本期变动系买卖一揽子沪深300指数... 1 广发证券股份有限公司 \n",
"\n",
" member2 \n",
"0 0 \n",
"1 0 \n",
"2 瑞峰(张家港)光伏科技有限公司 \n",
"3 0 \n",
"4 辽宁成大股份有限公司 "
]
},
"execution_count": 112,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"train_data = pd.read_csv('../data/info_extract/train_data.csv', encoding = 'gb2312', header=0)\n",
"train_data.head()"
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>id</th>\n",
" <th>sentence</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>9259</td>\n",
" <td>2015年1月26日,多氟多化工股份有限公司与李云峰先生签署了《附条件生效的股份认购合同》</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>9136</td>\n",
" <td>2、2016年2月5日,深圳市新纶科技股份有限公司与侯毅先</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>220</td>\n",
" <td>2015年10月26日,山东华鹏玻璃股份有限公司与张德华先生签署了附条件生效条件的《股份认购合同》</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>9041</td>\n",
" <td>2、2015年12月31日,印纪娱乐传媒股份有限公司与肖文革签订了《印纪娱乐传媒股份有限公司...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>10041</td>\n",
" <td>一、金发科技拟与熊海涛女士签订《股份转让协议》,协议约定:以每股1.0509元的收购价格,收...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" id sentence\n",
"0 9259 2015年1月26日,多氟多化工股份有限公司与李云峰先生签署了《附条件生效的股份认购合同》\n",
"1 9136 2、2016年2月5日,深圳市新纶科技股份有限公司与侯毅先\n",
"2 220 2015年10月26日,山东华鹏玻璃股份有限公司与张德华先生签署了附条件生效条件的《股份认购合同》\n",
"3 9041 2、2015年12月31日,印纪娱乐传媒股份有限公司与肖文革签订了《印纪娱乐传媒股份有限公司...\n",
"4 10041 一、金发科技拟与熊海涛女士签订《股份转让协议》,协议约定:以每股1.0509元的收购价格,收..."
]
},
"execution_count": 113,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_data = pd.read_csv('../data/info_extract/test_data.csv', encoding = 'gb2312', header=0)\n",
"test_data.head()"
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"((850, 5), (419, 2))"
]
},
"execution_count": 114,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"train_data.shape, test_data.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们选取一部分样本进行标注,即train_data,该数据由5列组成。id列表示原始样本序号;sentence列为我们截取的一段关键信息;如果关键信息中存在两个实体之间有股权交易关系则tag列为1,否则为0;如果tag为1,则在member1和member2列会记录两个实体出现在sentence中的名称。\n",
"\n",
"剩下的样本没有标注,即test_data,该数据只有id和sentence两列,希望你能训练模型对test_data中的实体进行识别,并判断实体对之间有没有股权交易关系。\n",
"\n",
"### 练习2:\n",
"将每句句子中实体识别出,存入实体词典,并用特殊符号替换语句。\n"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {},
"outputs": [],
"source": [
"# 处理test数据,利用开源工具进行实体识别和并使用实体统一函数存储实体\n",
"\n",
"import fool\n",
"import pandas as pd\n",
"from copy import copy\n",
"\n",
"\n",
"test_data = pd.read_csv('../data/info_extract/test_data.csv', encoding = 'gb2312', header=0)\n",
"test_data['ner'] = None\n",
"ner_id = 1001\n",
"ner_dict_new = {} # 存储所有实体\n",
"ner_dict_reverse_new = {} # 存储所有实体"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {},
"outputs": [],
"source": [
"for i in range(len(test_data)):\n",
" sentence = copy(test_data.iloc[i, 1])\n",
" # TODO:调用fool进行实体识别,得到words和ners结果\n",
" # TODO ...\n",
" # print(sentence)\n",
" words, ners = fool.analysis(sentence)\n",
" \n",
" ners[0].sort(key=lambda x:x[0], reverse=True)\n",
" for start, end, ner_type, ner_name in ners[0]:\n",
" if ner_type=='company' or ner_type=='person':\n",
" \n",
" company_main_name = ner_name \n",
" if ner_type=='company':\n",
" # TODO:调用实体统一函数,存储统一后的实体\n",
" # 并自增ner_id\n",
" # TODO ...\n",
" lst = main_extract(ner_name,stop_word,d_4_delete,d_city_province)\n",
" company_main_name = ''.join(lst) # 对公司名提取主体部分,将包含相同主体部分的公司统一为一个实体\n",
"\n",
" ner_dict_new[company_main_name] = ner_id\n",
" ner_dict_reverse_new[ner_id] = company_main_name\n",
" ner_id += 1 \n",
" \n",
" # 在句子中用编号替换实体名\n",
" sentence = sentence[:start] + ' ner_' + str(ner_dict_new[company_main_name]) + '_ ' + sentence[end:]\n",
" test_data.iloc[i, -1] = sentence\n",
"\n",
"X_test = test_data[['ner']]"
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {},
"outputs": [],
"source": [
"# 处理train数据,利用开源工具进行实体识别和并使用实体统一函数存储实体\n",
"train_data = pd.read_csv('../data/info_extract/train_data.csv', encoding = 'gb2312', header=0)\n",
"train_data['ner'] = None"
]
},
{
"cell_type": "code",
"execution_count": 119,
"metadata": {},
"outputs": [],
"source": [
"for i in range(len(train_data)):\n",
" # 判断正负样本\n",
" if train_data.iloc[i,:]['member1']=='0' and train_data.iloc[i,:]['member2']=='0':\n",
" sentence = copy(train_data.iloc[i, 1])\n",
" # TODO:调用fool进行实体识别,得到words和ners结果\n",
" # TODO ...\n",
" words, ners = fool.analysis(sentence)\n",
" \n",
" ners[0].sort(key=lambda x:x[0], reverse=True)\n",
" for start, end, ner_type, ner_name in ners[0]:\n",
" if ner_type=='company' or ner_type=='person':\n",
" # TODO:调用实体统一函数,存储统一后的实体\n",
" # 并自增ner_id\n",
" # TODO ...\n",
" company_main_name = ner_name \n",
" if ner_type=='company':\n",
" # TODO:调用实体统一函数,存储统一后的实体\n",
" # 并自增ner_id\n",
" # TODO ...\n",
" lst = main_extract(ner_name,stop_word,d_4_delete,d_city_province)\n",
" company_main_name = ''.join(lst) # 对公司名提取主体部分,将包含相同主体部分的公司统一为一个实体\n",
"\n",
" ner_dict_new[company_main_name] = ner_id\n",
" ner_dict_reverse_new[ner_id] = company_main_name\n",
" ner_id += 1 \n",
"\n",
" # 在句子中用编号替换实体名\n",
" sentence = sentence[:start] + ' ner_' + str(ner_dict_new[company_main_name]) + '_ ' + sentence[end-1:]\n",
" train_data.iloc[i, -1] = sentence\n",
" else:\n",
" # 将训练集中正样本已经标注的实体也使用编码替换\n",
" sentence = copy(train_data.iloc[i,:]['sentence'])\n",
" for company_main_name in [train_data.iloc[i,:]['member1'],train_data.iloc[i,:]['member2']]:\n",
" # TODO:调用实体统一函数,存储统一后的实体\n",
" # 并自增ner_id\n",
" # TODO ...\n",
"\n",
" company_main_name = ner_name \n",
" if ner_type=='company':\n",
" # TODO:调用实体统一函数,存储统一后的实体\n",
" # 并自增ner_id\n",
" # TODO ...\n",
" lst = main_extract(ner_name,stop_word,d_4_delete,d_city_province)\n",
" company_main_name = ''.join(lst) # 对公司名提取主体部分,将包含相同主体部分的公司统一为一个实体\n",
"\n",
" ner_dict_new[company_main_name] = ner_id\n",
" ner_dict_reverse_new[ner_id] = company_main_name\n",
" ner_id += 1 \n",
"\n",
" # 在句子中用编号替换实体名\n",
" sentence = re.sub(company_main_name, ' ner_%s_ '%(str(ner_dict_new[company_main_name])), sentence)\n",
" train_data.iloc[i, -1] = sentence\n",
" \n",
"y = train_data.loc[:,['tag']]\n",
"train_num = len(train_data)\n",
"X_train = train_data[['ner']]\n",
"\n",
"# 将train和test放在一起提取特征\n",
"X = pd.concat([X_train, X_test])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 步骤 3:关系抽取\n",
"\n",
"\n",
"目标:借助句法分析工具,和实体识别的结果,以及文本特征,基于训练数据抽取关系,并存储进图数据库。\n",
"\n",
"本次要求抽取股权交易关系,关系为无向边,不要求判断投资方和被投资方,只要求得到双方是否存在交易关系。\n",
"\n",
"模板建立可以使用“正则表达式”、“实体间距离”、“实体上下文”、“依存句法”等。\n",
"\n",
"答案提交在submit目录中,命名为info_extract_submit.csv和info_extract_entity.csv。\n",
"- info_extract_entity.csv格式为:第一列是实体编号,第二列是实体名(实体统一的多个实体名用“|”分隔)\n",
"- info_extract_submit.csv格式为:第一列是关系中实体1的编号,第二列为关系中实体2的编号。\n",
"\n",
"示例:\n",
"- info_extract_entity.csv\n",
"\n",
"| 实体编号 | 实体名 |\n",
"| ------ | ------ |\n",
"| 1001 | 小王 |\n",
"| 1002 | A化工厂 |\n",
"\n",
"- info_extract_submit.csv\n",
"\n",
"| 实体1 | 实体2 |\n",
"| ------ | ------ |\n",
"| 1001 | 1003 |\n",
"| 1002 | 1001 |"
]
},
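{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the submission format (assuming ner_dict_reverse_new from Step 2 is populated), the entity table can be written as below. It records only the unified main name per id; collecting the \"|\"-separated aliases would need extra bookkeeping during entity unification."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: dump the id -> unified name mapping from Step 2 in the submission format.\n",
"# Aliases are not tracked by ner_dict_reverse_new, so only the main name is written.\n",
"import pandas as pd\n",
"\n",
"entity_df = pd.DataFrame(list(ner_dict_reverse_new.items()), columns=['实体编号', '实体名'])\n",
"entity_df.to_csv('../submit/info_extract_entity.csv', index=False, encoding='utf-8')"
]
},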
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 练习3:提取文本tf-idf特征\n",
"\n",
"去除停用词,并转换成tfidf向量。"
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [],
"source": [
"# code\n",
"from sklearn.feature_extraction.text import TfidfTransformer \n",
"from sklearn.feature_extraction.text import CountVectorizer \n",
"from pyltp import Segmentor\n",
"\n",
"\n",
"# 实体符号加入分词词典\n",
"with open('../data/user_dict.txt', 'w') as fw:\n",
" for v in ['一', '二', '三', '四', '五', '六', '七', '八', '九', '十']:\n",
" fw.write( v + '号企业 ni\\n')\n",
"\n",
"# 初始化实例\n",
"segmentor = Segmentor() \n",
"# 加载模型,加载自定义词典\n",
"segmentor.load_with_lexicon('/Users/zbh/data/ltp_data_v3.4.0/cws.model', '../data/user_dict.txt') "
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {},
"outputs": [],
"source": [
"# 加载停用词\n",
"fr = open(r'../data/dict/stopwords.txt', encoding='utf-8') \n",
"stop_word = fr.readlines()\n",
"stop_word = [re.sub(r'(\\r|\\n)*','',stop_word[i]) for i in range(len(stop_word))]\n",
"\n",
"# 分词\n",
"f = lambda x: ' '.join([word for word in segmentor.segment(x) if word not in stop_word and not re.findall(r'ner\\_\\d\\d\\d\\d\\_', word)])\n",
"corpus=X['ner'].map(f).tolist()"
]
},
{
"cell_type": "code",
"execution_count": 127,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['二 委托 贷款 对象 公司 名称 ner 2186_司 注册 地址 新乡市 平原路 新二街 交汇处 国贸 大厦 西北角 法定 代表人 ner 2185_司 类型 责任 公司 注册 资本 人民币 伍仟 肆佰伍 拾 壹万捌仟 捌佰元 经营 房地产 开发 经营',\n",
" '四 关联 交易 合同 公司 ner 2188_司 签署 附 生效 股份 认购 合同 1 股份 认购 ner 2187_司 出资 不 超过 人民币 30亿 元 认购 本次 发行 股份 拟 认购 = 拟 出资额 本次 发行 发行 价格']"
]
},
"execution_count": 127,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"corpus[30:32]"
]
},
{
"cell_type": "code",
"execution_count": 128,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"# TODO:提取tfidf特征\n",
"# TODO ...\n",
"tv = TfidfVectorizer(use_idf=True, smooth_idf=True, norm=None)\n",
"tv_fit = tv.fit_transform(corpus)"
]
},
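{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check (a sketch, not part of the required solution): tv_fit is a sparse matrix with one row per sentence in corpus, and the fitted vectorizer exposes its vocabulary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: inspect the TF-IDF features\n",
"print(tv_fit.shape)                      # (number of sentences, vocabulary size)\n",
"print(len(tv.vocabulary_))               # vocabulary size\n",
"print(list(tv.vocabulary_.items())[:5])  # a few (term, column index) pairs"
]
},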
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 练习4:提取句法特征\n",
"除了词语层面的句向量特征,我们还可以从句法入手,提取一些句法分析的特征。\n",
"\n",
"参考特征:\n",
"\n",
"1、企业实体间距离\n",
"\n",
"2、企业实体间句法距离\n",
"\n",
"3、企业实体分别和关键触发词的距离\n",
"\n",
"4、实体的依存关系类别"
]
},
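{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch of reference feature 1, the token distance between two company entities, computed from a parse_result DataFrame like the one built later in this section (column 0 holds the word of each token; '一号企业' and '二号企业' are the placeholders introduced by parse()). The helper name entity_token_distance is ours, not part of the template."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of reference feature 1: token distance between two company entities,\n",
"# based on a parse_result DataFrame whose column 0 contains the words.\n",
"def entity_token_distance(parse_result, ent1='一号企业', ent2='二号企业'):\n",
"    words = list(parse_result[0])\n",
"    if ent1 in words and ent2 in words:\n",
"        return abs(words.index(ent1) - words.index(ent2))\n",
"    return -1  # at least one of the two entities is missing from the sentence"
]
},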
{
"cell_type": "code",
"execution_count": 192,
"metadata": {},
"outputs": [],
"source": [
"# -*- coding: utf-8 -*-\n",
"from pyltp import Parser\n",
"from pyltp import Segmentor\n",
"from pyltp import Postagger\n",
"import networkx as nx\n",
"import pylab\n",
"import re\n",
"\n",
"postagger = Postagger() # 初始化实例\n",
"postagger.load_with_lexicon('/Users/zbh/data/ltp_data_v3.4.0/pos.model', '../data/user_dict.txt') # 加载模型\n",
"segmentor = Segmentor() # 初始化实例\n",
"segmentor.load_with_lexicon('/Users/zbh/data/ltp_data_v3.4.0/cws.model', '../data/user_dict.txt') # 加载模型"
]
},
{
"cell_type": "code",
"execution_count": 193,
"metadata": {},
"outputs": [
{
"ename": "IndentationError",
"evalue": "expected an indented block (<ipython-input-193-1d6b0a11487d>, line 17)",
"output_type": "error",
"traceback": [
"\u001b[0;36m File \u001b[0;32m\"<ipython-input-193-1d6b0a11487d>\"\u001b[0;36m, line \u001b[0;32m17\u001b[0m\n\u001b[0;31m s = s.replace(ner, num_lst[i]+'号企业')\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mIndentationError\u001b[0m\u001b[0;31m:\u001b[0m expected an indented block\n"
]
}
],
"source": [
"def parse(s):\n",
" \"\"\"\n",
" 对语句进行句法分析,并返回句法结果\n",
" \"\"\"\n",
" tmp_ner_dict = {}\n",
" num_lst = ['一', '二', '三', '四', '五', '六', '七', '八', '九', '十']\n",
"\n",
" # 将公司代码替换为特殊称谓,保证分词词性正确\n",
" for i, ner in enumerate(list(set(re.findall(r'(ner\\_\\d\\d\\d\\d\\_)', s)))):\n",
" try:\n",
" tmp_ner_dict[num_lst[i]+'号企业'] = ner\n",
" except IndexError:\n",
" # TODO:定义错误情况的输出\n",
" # TODO ...\n",
" \n",
" \n",
" s = s.replace(ner, num_lst[i]+'号企业')\n",
" words = segmentor.segment(s)\n",
" tags = postagger.postag(words)\n",
" parser = Parser() # 初始化实例\n",
" parser.load('/Users/zbh/data/ltp_data_v3.4.0/parser.model') # 加载模型\n",
" arcs = parser.parse(words, tags) # 句法分析\n",
" arcs_lst = list(map(list, zip(*[[arc.head, arc.relation] for arc in arcs])))\n",
" \n",
" # 句法分析结果输出\n",
" parse_result = pd.DataFrame([[a,b,c,d] for a,b,c,d in zip(list(words),list(tags), arcs_lst[0], arcs_lst[1])], index = range(1,len(words)+1))\n",
" parser.release() # 释放模型\n",
" # TODO:提取企业实体依存句法类型\n",
" # TODO ...\n",
" \n",
" \n",
"\n",
" # 投资关系关键词\n",
" key_words = [\"收购\",\"竞拍\",\"转让\",\"扩张\",\"并购\",\"注资\",\"整合\",\"并入\",\"竞购\",\"竞买\",\"支付\",\"收购价\",\"收购价格\",\"承购\",\"购得\",\"购进\",\n",
" \"购入\",\"买进\",\"买入\",\"赎买\",\"购销\",\"议购\",\"函购\",\"函售\",\"抛售\",\"售卖\",\"销售\",\"转售\"]\n",
" # TODO:*根据关键词和对应句法关系提取特征(如没有思路可以不完成)\n",
" # TODO ...\n",
" \n",
" \n",
" parser.release() # 释放模型\n",
" return your_result"
]
},
{
"cell_type": "code",
"execution_count": 178,
"metadata": {},
"outputs": [],
"source": [
"tmp_ner_dict = {}\n",
"num_lst = ['一', '二', '三', '四', '五', '六', '七', '八', '九', '十']"
]
},
{
"cell_type": "code",
"execution_count": 179,
"metadata": {},
"outputs": [],
"source": [
"s = X['ner'][:1][0]"
]
},
{
"cell_type": "code",
"execution_count": 180,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'与本公司关系:受同一公司控制 2, ner_2126_ 司企业类型: 有限公司注册地址: 富阳市东洲街道东洲工业功能区九号路 1 号 法定代表人: ner_2125_ 明注册资本: ?16,000 万元经营范围: 许可经营项目:制造高效节能感应电机;普通货运。'"
]
},
"execution_count": 180,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"s"
]
},
{
"cell_type": "code",
"execution_count": 189,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 ner_2126_\n",
"1 ner_2125_\n"
]
}
],
"source": [
"# 将公司代码替换为特殊称谓,保证分词词性正确\n",
"for i, ner in enumerate(list(set(re.findall(r'(ner\\_\\d\\d\\d\\d\\_)', s)))):\n",
" print(i, ner)\n",
" try:\n",
" tmp_ner_dict[num_lst[i]+'号企业'] = ner\n",
" except IndexError:\n",
" # TODO:定义错误情况的输出\n",
" # TODO ...\n",
" print('ner num is out off range')\n",
" pass\n",
" \n",
" s = s.replace(ner, num_lst[i]+'号企业')"
]
},
{
"cell_type": "code",
"execution_count": 194,
"metadata": {},
"outputs": [],
"source": [
"words = segmentor.segment(s)\n",
"tags = postagger.postag(words)\n",
"parser = Parser() # 初始化实例\n",
"parser.load('/Users/zbh/data/ltp_data_v3.4.0/parser.model') # 加载模型\n",
"arcs = parser.parse(words, tags) # 句法分析\n",
"arcs_lst = list(map(list, zip(*[[arc.head, arc.relation] for arc in arcs])))"
]
},
{
"cell_type": "code",
"execution_count": 196,
"metadata": {},
"outputs": [],
"source": [
"# arcs_lst"
]
},
{
"cell_type": "code",
"execution_count": 197,
"metadata": {},
"outputs": [],
"source": [
"# 句法分析结果输出\n",
"parse_result = pd.DataFrame([[a,b,c,d] for a,b,c,d in zip(list(words),list(tags), arcs_lst[0], arcs_lst[1])], index = range(1,len(words)+1))\n",
"parser.release() # 释放模型\n",
"# TODO:提取企业实体依存句法类型\n",
"# TODO ..."
]
},
{
"cell_type": "code",
"execution_count": 198,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>0</th>\n",
" <th>1</th>\n",
" <th>2</th>\n",
" <th>3</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>与</td>\n",
" <td>p</td>\n",
" <td>6</td>\n",
" <td>ADV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>本</td>\n",
" <td>r</td>\n",
" <td>3</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>公司</td>\n",
" <td>n</td>\n",
" <td>4</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>关系</td>\n",
" <td>n</td>\n",
" <td>6</td>\n",
" <td>SBV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>4</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>受</td>\n",
" <td>v</td>\n",
" <td>0</td>\n",
" <td>HED</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>同一</td>\n",
" <td>b</td>\n",
" <td>8</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>公司</td>\n",
" <td>n</td>\n",
" <td>9</td>\n",
" <td>SBV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>控制</td>\n",
" <td>v</td>\n",
" <td>6</td>\n",
" <td>VOB</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>2</td>\n",
" <td>m</td>\n",
" <td>9</td>\n",
" <td>VOB</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>,</td>\n",
" <td>wp</td>\n",
" <td>6</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>一号企业</td>\n",
" <td>ni</td>\n",
" <td>13</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>司</td>\n",
" <td>n</td>\n",
" <td>14</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>企业</td>\n",
" <td>n</td>\n",
" <td>15</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>类型</td>\n",
" <td>n</td>\n",
" <td>6</td>\n",
" <td>COO</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>15</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>有限公司</td>\n",
" <td>n</td>\n",
" <td>19</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>注册</td>\n",
" <td>v</td>\n",
" <td>19</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>地址</td>\n",
" <td>n</td>\n",
" <td>15</td>\n",
" <td>COO</td>\n",
" </tr>\n",
" <tr>\n",
" <th>20</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>19</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>21</th>\n",
" <td>富阳市</td>\n",
" <td>ns</td>\n",
" <td>22</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>22</th>\n",
" <td>东洲</td>\n",
" <td>ns</td>\n",
" <td>23</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>23</th>\n",
" <td>街道</td>\n",
" <td>n</td>\n",
" <td>24</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>24</th>\n",
" <td>东洲</td>\n",
" <td>ns</td>\n",
" <td>25</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25</th>\n",
" <td>工业</td>\n",
" <td>n</td>\n",
" <td>26</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>26</th>\n",
" <td>功能</td>\n",
" <td>n</td>\n",
" <td>27</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>27</th>\n",
" <td>区九号路</td>\n",
" <td>n</td>\n",
" <td>31</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>28</th>\n",
" <td>1</td>\n",
" <td>m</td>\n",
" <td>29</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>29</th>\n",
" <td>号</td>\n",
" <td>q</td>\n",
" <td>31</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>30</th>\n",
" <td>法定</td>\n",
" <td>b</td>\n",
" <td>31</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>31</th>\n",
" <td>代表人</td>\n",
" <td>n</td>\n",
" <td>19</td>\n",
" <td>COO</td>\n",
" </tr>\n",
" <tr>\n",
" <th>32</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>31</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>33</th>\n",
" <td>二号企业</td>\n",
" <td>ni</td>\n",
" <td>36</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>34</th>\n",
" <td>明</td>\n",
" <td>d</td>\n",
" <td>36</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>35</th>\n",
" <td>注册</td>\n",
" <td>v</td>\n",
" <td>36</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>36</th>\n",
" <td>资本</td>\n",
" <td>n</td>\n",
" <td>50</td>\n",
" <td>SBV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>37</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>36</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>38</th>\n",
" <td>?</td>\n",
" <td>wp</td>\n",
" <td>36</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>39</th>\n",
" <td>16</td>\n",
" <td>m</td>\n",
" <td>50</td>\n",
" <td>ADV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>40</th>\n",
" <td>,</td>\n",
" <td>wp</td>\n",
" <td>39</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>41</th>\n",
" <td>000万</td>\n",
" <td>m</td>\n",
" <td>42</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>42</th>\n",
" <td>元</td>\n",
" <td>q</td>\n",
" <td>44</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>43</th>\n",
" <td>经营</td>\n",
" <td>v</td>\n",
" <td>44</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>44</th>\n",
" <td>范围</td>\n",
" <td>n</td>\n",
" <td>50</td>\n",
" <td>ADV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>45</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>44</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>46</th>\n",
" <td>许可</td>\n",
" <td>v</td>\n",
" <td>48</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>47</th>\n",
" <td>经营</td>\n",
" <td>v</td>\n",
" <td>48</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>48</th>\n",
" <td>项目</td>\n",
" <td>n</td>\n",
" <td>50</td>\n",
" <td>SBV</td>\n",
" </tr>\n",
" <tr>\n",
" <th>49</th>\n",
" <td>:</td>\n",
" <td>wp</td>\n",
" <td>48</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>50</th>\n",
" <td>制造</td>\n",
" <td>v</td>\n",
" <td>31</td>\n",
" <td>COO</td>\n",
" </tr>\n",
" <tr>\n",
" <th>51</th>\n",
" <td>高效</td>\n",
" <td>b</td>\n",
" <td>54</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>52</th>\n",
" <td>节能</td>\n",
" <td>v</td>\n",
" <td>54</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>53</th>\n",
" <td>感应</td>\n",
" <td>n</td>\n",
" <td>54</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>54</th>\n",
" <td>电机</td>\n",
" <td>n</td>\n",
" <td>50</td>\n",
" <td>VOB</td>\n",
" </tr>\n",
" <tr>\n",
" <th>55</th>\n",
" <td>;</td>\n",
" <td>wp</td>\n",
" <td>50</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" <tr>\n",
" <th>56</th>\n",
" <td>普通</td>\n",
" <td>a</td>\n",
" <td>57</td>\n",
" <td>ATT</td>\n",
" </tr>\n",
" <tr>\n",
" <th>57</th>\n",
" <td>货运</td>\n",
" <td>n</td>\n",
" <td>50</td>\n",
" <td>COO</td>\n",
" </tr>\n",
" <tr>\n",
" <th>58</th>\n",
" <td>。</td>\n",
" <td>wp</td>\n",
" <td>6</td>\n",
" <td>WP</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" 0 1 2 3\n",
"1 与 p 6 ADV\n",
"2 本 r 3 ATT\n",
"3 公司 n 4 ATT\n",
"4 关系 n 6 SBV\n",
"5 : wp 4 WP\n",
"6 受 v 0 HED\n",
"7 同一 b 8 ATT\n",
"8 公司 n 9 SBV\n",
"9 控制 v 6 VOB\n",
"10 2 m 9 VOB\n",
"11 , wp 6 WP\n",
"12 一号企业 ni 13 ATT\n",
"13 司 n 14 ATT\n",
"14 企业 n 15 ATT\n",
"15 类型 n 6 COO\n",
"16 : wp 15 WP\n",
"17 有限公司 n 19 ATT\n",
"18 注册 v 19 ATT\n",
"19 地址 n 15 COO\n",
"20 : wp 19 WP\n",
"21 富阳市 ns 22 ATT\n",
"22 东洲 ns 23 ATT\n",
"23 街道 n 24 ATT\n",
"24 东洲 ns 25 ATT\n",
"25 工业 n 26 ATT\n",
"26 功能 n 27 ATT\n",
"27 区九号路 n 31 ATT\n",
"28 1 m 29 ATT\n",
"29 号 q 31 ATT\n",
"30 法定 b 31 ATT\n",
"31 代表人 n 19 COO\n",
"32 : wp 31 WP\n",
"33 二号企业 ni 36 ATT\n",
"34 明 d 36 ATT\n",
"35 注册 v 36 ATT\n",
"36 资本 n 50 SBV\n",
"37 : wp 36 WP\n",
"38 ? wp 36 WP\n",
"39 16 m 50 ADV\n",
"40 , wp 39 WP\n",
"41 000万 m 42 ATT\n",
"42 元 q 44 ATT\n",
"43 经营 v 44 ATT\n",
"44 范围 n 50 ADV\n",
"45 : wp 44 WP\n",
"46 许可 v 48 ATT\n",
"47 经营 v 48 ATT\n",
"48 项目 n 50 SBV\n",
"49 : wp 48 WP\n",
"50 制造 v 31 COO\n",
"51 高效 b 54 ATT\n",
"52 节能 v 54 ATT\n",
"53 感应 n 54 ATT\n",
"54 电机 n 50 VOB\n",
"55 ; wp 50 WP\n",
"56 普通 a 57 ATT\n",
"57 货运 n 50 COO\n",
"58 。 wp 6 WP"
]
},
"execution_count": 198,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"parse_result"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 投资关系关键词\n",
"key_words = [\"收购\",\"竞拍\",\"转让\",\"扩张\",\"并购\",\"注资\",\"整合\",\"并入\",\"竞购\",\"竞买\",\"支付\",\"收购价\",\"收购价格\",\"承购\",\"购得\",\"购进\",\n",
" \"购入\",\"买进\",\"买入\",\"赎买\",\"购销\",\"议购\",\"函购\",\"函售\",\"抛售\",\"售卖\",\"销售\",\"转售\"]\n",
"# TODO:*根据关键词和对应句法关系提取特征(如没有思路可以不完成)\n",
"# TODO ...\n",
"\n",
"\n",
"parser.release() # 释放模型\n",
"return your_result"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def shortest_path(arcs_ret, source, target):\n",
" \"\"\"\n",
" 求出两个词最短依存句法路径,不存在路径返回-1\n",
" arcs_ret:句法分析结果\n",
" source:实体1\n",
" target:实体2\n",
" \"\"\"\n",
" G=nx.DiGraph()\n",
" # 为这个网络添加节点...\n",
" for i in list(arcs_ret.index):\n",
" G.add_node(i)\n",
" # TODO:在网络中添加带权中的边...(注意,我们需要的是无向边)\n",
" # TODO ...\n",
" \n",
"\n",
" try:\n",
" # TODO:利用nx包中shortest_path_length方法实现最短距离提取\n",
" # TODO ...\n",
" \n",
" \n",
" return distance\n",
" except:\n",
" return -1"
]
},
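{
"cell_type": "markdown",
"metadata": {},
"source": [
"The TODOs above are left as scaffolding. The cell below is one possible completion (a sketch, named shortest_path_sketch to keep it separate): it treats the dependency tree as an undirected graph, using column 2 of arcs_ret as the head index of each token (0 means the root)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import networkx as nx\n",
"\n",
"# Sketch of one possible completion of shortest_path: build an undirected view of the\n",
"# dependency tree and let networkx measure the path length between two token indices.\n",
"def shortest_path_sketch(arcs_ret, source, target):\n",
"    G = nx.Graph()  # an undirected graph gives us the undirected edges directly\n",
"    for i in list(arcs_ret.index):\n",
"        G.add_node(i)\n",
"        head = arcs_ret.loc[i, 2]  # head index of token i (0 means root)\n",
"        if head != 0:\n",
"            G.add_edge(i, head)\n",
"    try:\n",
"        return nx.shortest_path_length(G, source=source, target=target)\n",
"    except (nx.NetworkXNoPath, nx.NodeNotFound):\n",
"        return -1"
]
},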
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": [
"def get_feature(s):\n",
" \"\"\"\n",
" 汇总上述函数汇总句法分析特征与TFIDF特征\n",
" \"\"\"\n",
" # TODO:汇总上述函数汇总句法分析特征与TFIDF特征\n",
" # TODO ...\n",
" \n",
" \n",
" return features\n"
]
},
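{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the combination step, assuming tv_fit from Exercise 3 and a list of numeric parse features (for example the distances sketched earlier); the helper name get_feature_sketch is ours and not the required implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Sketch: concatenate the TF-IDF row of sample idx with a list of numeric parse features\n",
"def get_feature_sketch(idx, parse_features):\n",
"    tfidf_row = tv_fit[idx].toarray().ravel()  # dense TF-IDF vector for sample idx\n",
"    return np.hstack([tfidf_row, np.asarray(parse_features, dtype=float)])"
]
},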
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 练习5:建立分类器\n",
"\n",
"利用已经提取好的tfidf特征以及parse特征,建立分类器进行分类任务。"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": [
"# 建立分类器进行分类\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"from sklearn import preprocessing\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"# TODO:定义需要遍历的参数\n",
"\n",
"\n",
"# TODO:选择模型\n",
"\n",
"\n",
"# TODO:利用GridSearchCV搜索最佳参数\n",
"\n",
"\n",
"# TODO:对Test_data进行分类\n",
"\n",
"\n",
"\n",
"# TODO:保存Test_data分类结果\n",
"# 答案提交在submit目录中,命名为info_extract_submit.csv和info_extract_entity.csv。\n",
"# info_extract_entity.csv格式为:第一列是实体编号,第二列是实体名(实体统一的多个实体名用“|”分隔)\n",
"# info_extract_submit.csv格式为:第一列是关系中实体1的编号,第二列为关系中实体2的编号。\n",
"\n",
" "
]
},
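{
"cell_type": "markdown",
"metadata": {},
"source": [
"One hedged sketch of how the TODOs above could be filled, using the TF-IDF features only: the first train_num rows of tv_fit correspond to the training sentences, y holds the tags, and a logistic regression is tuned with GridSearchCV. The parameter grid and the scoring choice are ours; the parse features and the CSV writing still need to be added for the full solution."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of one way to fill in the TODOs above, using TF-IDF features only\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"X_feat = tv_fit[:train_num]                # training rows of the TF-IDF matrix\n",
"y_label = y['tag'].astype(int).values      # 0/1 labels from train_data\n",
"\n",
"# parameter grid to search over (an assumption, adjust as needed)\n",
"param_grid = {'C': [0.01, 0.1, 1, 10]}\n",
"\n",
"# cross-validated grid search over a logistic regression\n",
"clf = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5, scoring='f1')\n",
"clf.fit(X_feat, y_label)\n",
"print(clf.best_params_, clf.best_score_)\n",
"\n",
"# classify the test sentences (rows from train_num onwards belong to test_data)\n",
"test_pred = clf.predict(tv_fit[train_num:])"
]
},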
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 练习6:操作图数据库\n",
"对关系最好的描述就是用图,那这里就需要使用图数据库,目前最常用的图数据库是noe4j,通过cypher语句就可以操作图数据库的增删改查。可以参考“https://cuiqingcai.com/4778.html”。\n",
"\n",
"本次作业我们使用neo4j作为图数据库,neo4j需要java环境,请先配置好环境。\n",
"\n",
"将我们提出的实体关系插入图数据库,并查询某节点的3层投资关系,即三个节点组成的路径(如果有的话)。如果无法找到3层投资关系,请查询出任意指定节点的投资路径。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": [
"\n",
"from py2neo import Node, Relationship, Graph\n",
"\n",
"graph = Graph(\n",
" \"http://localhost:7474\", \n",
" username=\"neo4j\", \n",
" password=\"person\"\n",
")\n",
"\n",
"for v in relation_list:\n",
" a = Node('Company', name=v[0])\n",
" b = Node('Company', name=v[1])\n",
" \n",
" # 本次不区分投资方和被投资方,无向图\n",
" r = Relationship(a, 'INVEST', b)\n",
" s = a | b | r\n",
" graph.create(s)\n",
" r = Relationship(b, 'INVEST', a)\n",
" s = a | b | r\n",
" graph.create(s)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": [
"# TODO:查询某节点的3层投资关系\n"
]
},
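{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the query, assuming the INVEST relationships were created as above. The company name used here is a placeholder; replace it with a node name that exists in your graph, and adjust the path length bounds to match the definition of 3-hop you use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: query investment paths of up to 3 INVEST hops starting from a chosen company.\n",
"# 'ner_1001' is a placeholder; use a name that actually exists in your graph.\n",
"query = (\n",
"    'MATCH p = (a:Company {name: $name})-[:INVEST*1..3]-(b:Company) '\n",
"    'RETURN p LIMIT 5'\n",
")\n",
"result = graph.run(query, name='ner_1001').data()\n",
"if not result:\n",
"    print('no investment path found for this node')\n",
"for record in result:\n",
"    print(record['p'])"
]
},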
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 步骤4:实体消歧\n",
"解决了实体识别和关系的提取,我们已经完成了一大截,但是我们提取的实体究竟对应知识库中哪个实体呢?下图中,光是“苹果”就对应了13个同名实体。\n",
"<img src=\"../image/baike2.png\", width=340, heigth=480>\n",
"\n",
"在这个问题上,实体消歧旨在解决文本中广泛存在的名称歧义问题,将句中识别的实体与知识库中实体进行匹配,解决实体歧义问题。\n",
"\n",
"\n",
"### 练习7:\n",
"匹配test_data.csv中前25条样本中的人物实体对应的百度百科URL(此部分样本中所有人名均可在百度百科中链接到)。\n",
"\n",
"利用scrapy、beautifulsoup、request等python包对百度百科进行爬虫,判断是否具有一词多义的情况,如果有的话,选择最佳实体进行匹配。\n",
"\n",
"使用URL为‘https://baike.baidu.com/item/’+人名 可以访问百度百科该人名的词条,此处需要根据爬取到的网页识别该词条是否对应多个实体,如下图:\n",
"<img src=\"../image/baike1.png\", width=440, heigth=480>\n",
"如果该词条有对应多个实体,请返回正确匹配的实体URL,例如该示例网页中的‘https://baike.baidu.com/item/陆永/20793929’。\n",
"\n",
"- 提交文件:entity_disambiguation_submit.csv\n",
"- 提交格式:第一列为实体id(与info_extract_submit.csv中id保持一致),第二列为对应URL。\n",
"- 示例:\n",
"\n",
"| 实体编号 | URL |\n",
"| ------ | ------ |\n",
"| 1001 | https://baike.baidu.com/item/陆永/20793929 |\n",
"| 1002 | https://baike.baidu.com/item/王芳/567232 |\n"
]
},
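{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a rough sketch of the crawling step with requests and BeautifulSoup. The helper names candidate_urls and pick_best are ours, the CSS selector used to detect a disambiguation list is an assumption about Baidu Baike's markup and must be checked against the real page source, and the word-overlap scoring is just one simple way to pick the best candidate."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough sketch of the disambiguation crawl (requests + BeautifulSoup).\n",
"# NOTE: 'li.polysemant-list-item' is a guess at Baidu Baike's markup for the\n",
"# disambiguation list and must be verified against the actual page source.\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"import jieba\n",
"\n",
"def candidate_urls(person_name):\n",
"    url = 'https://baike.baidu.com/item/' + person_name\n",
"    html = requests.get(url, timeout=10).text\n",
"    soup = BeautifulSoup(html, 'html.parser')\n",
"    links = soup.select('li.polysemant-list-item a')  # hypothetical selector\n",
"    if not links:\n",
"        return [url]  # no ambiguity detected: the name maps to a single entry\n",
"    return ['https://baike.baidu.com' + a.get('href', '') for a in links]\n",
"\n",
"def pick_best(context_words, urls):\n",
"    # score each candidate page by word overlap with the entity's sentence context\n",
"    best_url, best_score = urls[0], -1\n",
"    for u in urls:\n",
"        page_text = BeautifulSoup(requests.get(u, timeout=10).text, 'html.parser').get_text()\n",
"        score = sum(1 for w in set(jieba.cut(page_text)) if w in context_words)\n",
"        if score > best_score:\n",
"            best_url, best_score = u, score\n",
"    return best_url"
]
},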
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": [
"import jieba\n",
"import pandas as pd\n",
"\n",
"# 找出test_data.csv中前25条样本所有的人物名称,以及人物所在文档的上下文内容\n",
"test_data = pd.read_csv('../data/info_extract/test_data.csv', encoding = 'gb2312', header=0)\n",
"\n",
"# 存储人物以及上下文信息(key为人物ID,value为人物名称、人物上下文内容)\n",
"person_name = {}\n",
"\n",
"# 观察上下文的窗口大小\n",
"window = 10 \n",
"\n",
"# 遍历前25条样本\n",
"for i in range(25):\n",
" sentence = copy(test_data.iloc[i, 1])\n",
" words, ners = fool.analysis(sentence)\n",
" ners[0].sort(key=lambda x:x[0], reverse=True)\n",
" for start, end, ner_type, ner_name in ners[0]:\n",
" if ner_type=='person':\n",
" # TODO:提取实体的上下文\n",
" \n",
"\n",
"\n",
"\n",
"# 利用爬虫得到每个人物名称对应的URL\n",
"# TODO:找到每个人物实体的词条内容。\n",
"\n",
"# TODO:将样本中人物上下文与爬取词条结果进行对比,选择最接近的词条。\n",
"\n",
"\n",
"\n",
"# 输出结果\n",
"pd.DataFrame(result_data).to_csv('../submit/entity_disambiguation_submit.csv', index=False)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "py36",
"language": "python",
"name": "py36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}