Commit 7ef347aa by 20220116006

new file: project1-douban/douban_starter.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### 豆瓣评分的预测\n",
"\n",
"在这个项目中,我们要预测一部电影的评分,这个问题实际上就是一个分类问题。给定的输入为一段文本,输出为具体的评分。 在这个项目中,我们需要做:\n",
"- 文本的预处理,如停用词的过滤,低频词的过滤,特殊符号的过滤等\n",
"- 文本转化成向量,将使用三种方式,分别为tf-idf, word2vec以及BERT向量。 \n",
"- 训练逻辑回归和朴素贝叶斯模型,并做交叉验证\n",
"- 评估模型的准确率\n",
"\n",
"在具体标记为``TODO``的部分填写相应的代码。 "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"#导入数据处理的基础包\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"#导入用于计数的包\n",
"from collections import Counter\n",
"\n",
"#导入tf-idf相关的包\n",
"from sklearn.feature_extraction.text import TfidfTransformer \n",
"from sklearn.feature_extraction.text import CountVectorizer\n",
"\n",
"#导入模型评估的包\n",
"from sklearn import metrics\n",
"\n",
"#导入与word2vec相关的包\n",
"from gensim.models import KeyedVectors\n",
"\n",
"#导入与bert embedding相关的包,关于mxnet包下载的注意事项参考实验手册\n",
"from bert_embedding import BertEmbedding\n",
"import mxnet\n",
"\n",
"#包tqdm是用来对可迭代对象执行时生成一个进度条用以监视程序运行过程\n",
"from tqdm import tqdm\n",
"\n",
"#导入其他一些功能包\n",
"import requests\n",
"import os\n",
"import re\n",
"import warnings\n",
"warnings.filterwarnings(\"ignore\")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"# 由于bert嵌入无法使用GPU进行加速,所以这里使用了比例系数K来减小数据量\n",
"K = 5000"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"### 1. 读取数据并做文本的处理\n",
"你需要完成以下几步操作:\n",
"- 去掉无用的字符如!&,可自行定义\n",
"- 中文分词\n",
"- 去掉低频词"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>ID</th>\n",
" <th>Movie_Name_EN</th>\n",
" <th>Movie_Name_CN</th>\n",
" <th>Crawl_Date</th>\n",
" <th>Number</th>\n",
" <th>Username</th>\n",
" <th>Date</th>\n",
" <th>Star</th>\n",
" <th>Comment</th>\n",
" <th>Like</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0</td>\n",
" <td>Avengers Age of Ultron</td>\n",
" <td>复仇者联盟2</td>\n",
" <td>2017-01-22</td>\n",
" <td>1</td>\n",
" <td>然潘</td>\n",
" <td>2015-05-13</td>\n",
" <td>3</td>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>2404</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>10</td>\n",
" <td>Avengers Age of Ultron</td>\n",
" <td>复仇者联盟2</td>\n",
" <td>2017-01-22</td>\n",
" <td>11</td>\n",
" <td>影志</td>\n",
" <td>2015-04-30</td>\n",
" <td>4</td>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>381</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>20</td>\n",
" <td>Avengers Age of Ultron</td>\n",
" <td>复仇者联盟2</td>\n",
" <td>2017-01-22</td>\n",
" <td>21</td>\n",
" <td>随时流感</td>\n",
" <td>2015-04-28</td>\n",
" <td>2</td>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>120</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>30</td>\n",
" <td>Avengers Age of Ultron</td>\n",
" <td>复仇者联盟2</td>\n",
" <td>2017-01-22</td>\n",
" <td>31</td>\n",
" <td>乌鸦火堂</td>\n",
" <td>2015-05-08</td>\n",
" <td>4</td>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>30</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>40</td>\n",
" <td>Avengers Age of Ultron</td>\n",
" <td>复仇者联盟2</td>\n",
" <td>2017-01-22</td>\n",
" <td>41</td>\n",
" <td>办公室甜心</td>\n",
" <td>2015-05-10</td>\n",
" <td>5</td>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>16</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" ID Movie_Name_EN Movie_Name_CN Crawl_Date Number Username \\\n",
"0 0 Avengers Age of Ultron 复仇者联盟2 2017-01-22 1 然潘 \n",
"1 10 Avengers Age of Ultron 复仇者联盟2 2017-01-22 11 影志 \n",
"2 20 Avengers Age of Ultron 复仇者联盟2 2017-01-22 21 随时流感 \n",
"3 30 Avengers Age of Ultron 复仇者联盟2 2017-01-22 31 乌鸦火堂 \n",
"4 40 Avengers Age of Ultron 复仇者联盟2 2017-01-22 41 办公室甜心 \n",
"\n",
" Date Star Comment Like \n",
"0 2015-05-13 3 连奥创都知道整容要去韩国。 2404 \n",
"1 2015-04-30 4 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 381 \n",
"2 2015-04-28 2 奥创弱爆了弱爆了弱爆了啊!!!!!! 120 \n",
"3 2015-05-08 4 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 30 \n",
"4 2015-05-10 5 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 16 "
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#读取数据\n",
"data = pd.read_csv('data/DMSC.csv')\n",
"#观察数据格式\n",
"data.head()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 212506 entries, 0 to 212505\n",
"Data columns (total 10 columns):\n",
" # Column Non-Null Count Dtype \n",
"--- ------ -------------- ----- \n",
" 0 ID 212506 non-null int64 \n",
" 1 Movie_Name_EN 212506 non-null object\n",
" 2 Movie_Name_CN 212506 non-null object\n",
" 3 Crawl_Date 212506 non-null object\n",
" 4 Number 212506 non-null int64 \n",
" 5 Username 212496 non-null object\n",
" 6 Date 212506 non-null object\n",
" 7 Star 212506 non-null int64 \n",
" 8 Comment 212506 non-null object\n",
" 9 Like 212506 non-null int64 \n",
"dtypes: int64(4), object(6)\n",
"memory usage: 16.2+ MB\n"
]
}
],
"source": [
"#输出数据的一些相关信息\n",
"data.info()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Comment</th>\n",
" <th>Star</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>3</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>4</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>2</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>4</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>5</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Comment Star\n",
"0 连奥创都知道整容要去韩国。 3\n",
"1 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 4\n",
"2 奥创弱爆了弱爆了弱爆了啊!!!!!! 2\n",
"3 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 4\n",
"4 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 5"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#只保留数据中我们需要的两列:Comment列和Star列\n",
"data = data[['Comment','Star']]\n",
"#观察新的数据的格式\n",
"data.head()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Comment</th>\n",
" <th>Star</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>avengers1睡着1次 avengers2睡着两次。。。</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>谁再喊我看这种电影我和谁急!实在是接受无能。。。</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>Long takes, no stakes. 最后大战灾难性得乱 olsen到底什么能力完...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>视觉效果的极限是视觉疲劳</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>感觉有略黑暗了点,不过还是萌点满满,但是一想到就要完结了又心碎了一地,,,,</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>妇联成员都只会讲不好笑的笑话,唯一加分的是朱莉·德培</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>只算還OK的商業片。現在這類片型第一品牌就是漫威了,熱鬧打鬥大場面,人神機甲齊飛,各型超級...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>好看!好看!好看!</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>难看一笔</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>6/10。第一部精准的节奏、巧妙的悬念和清楚的内心戏不见了,或许导演不想把超级英雄打造成战...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>欧洲竟然真的是最早上映啊= =法国比美国还早一周……没怎么看懂的我想找科普说明都不容易!嘛...</td>\n",
" <td>1</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>我是美队的忠实脑!残!粉!!!!!!!!!</td>\n",
" <td>1</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Comment Star\n",
"0 连奥创都知道整容要去韩国。 1\n",
"1 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 1\n",
"2 奥创弱爆了弱爆了弱爆了啊!!!!!! 0\n",
"3 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 1\n",
"4 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 1\n",
"5 绝逼不质疑尾灯的导演和编剧水平 1\n",
"6 avengers1睡着1次 avengers2睡着两次。。。 0\n",
"7 谁再喊我看这种电影我和谁急!实在是接受无能。。。 0\n",
"8 超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不... 1\n",
"9 观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上... 1\n",
"10 Long takes, no stakes. 最后大战灾难性得乱 olsen到底什么能力完... 1\n",
"11 视觉效果的极限是视觉疲劳 1\n",
"12 感觉有略黑暗了点,不过还是萌点满满,但是一想到就要完结了又心碎了一地,,,, 1\n",
"13 妇联成员都只会讲不好笑的笑话,唯一加分的是朱莉·德培 0\n",
"14 只算還OK的商業片。現在這類片型第一品牌就是漫威了,熱鬧打鬥大場面,人神機甲齊飛,各型超級... 1\n",
"15 好看!好看!好看! 1\n",
"16 难看一笔 0\n",
"17 6/10。第一部精准的节奏、巧妙的悬念和清楚的内心戏不见了,或许导演不想把超级英雄打造成战... 1\n",
"18 欧洲竟然真的是最早上映啊= =法国比美国还早一周……没怎么看懂的我想找科普说明都不容易!嘛... 1\n",
"19 我是美队的忠实脑!残!粉!!!!!!!!! 1"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 这里的star代表具体的评分。但在这个项目中,我们要预测的是正面还是负面。我们把评分为1和2的看作是负面,把评分为3,4,5的作为正面\n",
"data['Star']=(data.Star/3).astype(int)\n",
"data.head(20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务1: 去掉一些无用的字符"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Comment</th>\n",
" <th>Star</th>\n",
" <th>comment_clean</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>1</td>\n",
" <td>连奥创都知道整容要去韩国</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>1</td>\n",
" <td>一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>0</td>\n",
" <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>1</td>\n",
" <td>与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>1</td>\n",
" <td>看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" <td>1</td>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>avengers1睡着1次 avengers2睡着两次。。。</td>\n",
" <td>0</td>\n",
" <td>睡着 次 睡着两次</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>谁再喊我看这种电影我和谁急!实在是接受无能。。。</td>\n",
" <td>0</td>\n",
" <td>谁再喊我看这种电影我和谁急 实在是接受无能</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不...</td>\n",
" <td>1</td>\n",
" <td>超愉悦以及超满足 在历经了第一阶段比漫画更普世的设定融合之后 发展到 居然出现了不少传统...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上...</td>\n",
" <td>1</td>\n",
" <td>观影过程中 耳边一直有一种突突突突突的声音 我还感慨电影为了让奥创给观众带来紧张感 声音上真...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>Long takes, no stakes. 最后大战灾难性得乱 olsen到底什么能力完...</td>\n",
" <td>1</td>\n",
" <td>最后大战灾难性得乱 到底什么能力完全没明白 是巴菲里的 其实剧本没那么差 美国例外论的主...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>视觉效果的极限是视觉疲劳</td>\n",
" <td>1</td>\n",
" <td>视觉效果的极限是视觉疲劳</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>感觉有略黑暗了点,不过还是萌点满满,但是一想到就要完结了又心碎了一地,,,,</td>\n",
" <td>1</td>\n",
" <td>感觉有略黑暗了点 不过还是萌点满满 但是一想到就要完结了又心碎了一地</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>妇联成员都只会讲不好笑的笑话,唯一加分的是朱莉·德培</td>\n",
" <td>0</td>\n",
" <td>妇联成员都只会讲不好笑的笑话 唯一加分的是朱莉 德培</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>只算還OK的商業片。現在這類片型第一品牌就是漫威了,熱鬧打鬥大場面,人神機甲齊飛,各型超級...</td>\n",
" <td>1</td>\n",
" <td>只算還 的商業片 現在這類片型第一品牌就是漫威了 熱鬧打鬥大場面 人神機甲齊飛 各型超級英雄...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>好看!好看!好看!</td>\n",
" <td>1</td>\n",
" <td>好看 好看 好看</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>难看一笔</td>\n",
" <td>0</td>\n",
" <td>难看一笔</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>6/10。第一部精准的节奏、巧妙的悬念和清楚的内心戏不见了,或许导演不想把超级英雄打造成战...</td>\n",
" <td>1</td>\n",
" <td>第一部精准的节奏 巧妙的悬念和清楚的内心戏不见了 或许导演不想把超级英雄打造成战斗机器 所以...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>欧洲竟然真的是最早上映啊= =法国比美国还早一周……没怎么看懂的我想找科普说明都不容易!嘛...</td>\n",
" <td>1</td>\n",
" <td>欧洲竟然真的是最早上映啊 法国比美国还早一周 没怎么看懂的我想找科普说明都不容易 嘛 我...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>我是美队的忠实脑!残!粉!!!!!!!!!</td>\n",
" <td>1</td>\n",
" <td>我是美队的忠实脑 残 粉</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Comment Star \\\n",
"0 连奥创都知道整容要去韩国。 1 \n",
"1 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 1 \n",
"2 奥创弱爆了弱爆了弱爆了啊!!!!!! 0 \n",
"3 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 1 \n",
"4 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 1 \n",
"5 绝逼不质疑尾灯的导演和编剧水平 1 \n",
"6 avengers1睡着1次 avengers2睡着两次。。。 0 \n",
"7 谁再喊我看这种电影我和谁急!实在是接受无能。。。 0 \n",
"8 超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不... 1 \n",
"9 观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上... 1 \n",
"10 Long takes, no stakes. 最后大战灾难性得乱 olsen到底什么能力完... 1 \n",
"11 视觉效果的极限是视觉疲劳 1 \n",
"12 感觉有略黑暗了点,不过还是萌点满满,但是一想到就要完结了又心碎了一地,,,, 1 \n",
"13 妇联成员都只会讲不好笑的笑话,唯一加分的是朱莉·德培 0 \n",
"14 只算還OK的商業片。現在這類片型第一品牌就是漫威了,熱鬧打鬥大場面,人神機甲齊飛,各型超級... 1 \n",
"15 好看!好看!好看! 1 \n",
"16 难看一笔 0 \n",
"17 6/10。第一部精准的节奏、巧妙的悬念和清楚的内心戏不见了,或许导演不想把超级英雄打造成战... 1 \n",
"18 欧洲竟然真的是最早上映啊= =法国比美国还早一周……没怎么看懂的我想找科普说明都不容易!嘛... 1 \n",
"19 我是美队的忠实脑!残!粉!!!!!!!!! 1 \n",
"\n",
" comment_clean \n",
"0 连奥创都知道整容要去韩国 \n",
"1 一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只... \n",
"2 奥创弱爆了弱爆了弱爆了啊 \n",
"3 与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ... \n",
"4 看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅... \n",
"5 绝逼不质疑尾灯的导演和编剧水平 \n",
"6 睡着 次 睡着两次 \n",
"7 谁再喊我看这种电影我和谁急 实在是接受无能 \n",
"8 超愉悦以及超满足 在历经了第一阶段比漫画更普世的设定融合之后 发展到 居然出现了不少传统... \n",
"9 观影过程中 耳边一直有一种突突突突突的声音 我还感慨电影为了让奥创给观众带来紧张感 声音上真... \n",
"10 最后大战灾难性得乱 到底什么能力完全没明白 是巴菲里的 其实剧本没那么差 美国例外论的主... \n",
"11 视觉效果的极限是视觉疲劳 \n",
"12 感觉有略黑暗了点 不过还是萌点满满 但是一想到就要完结了又心碎了一地 \n",
"13 妇联成员都只会讲不好笑的笑话 唯一加分的是朱莉 德培 \n",
"14 只算還 的商業片 現在這類片型第一品牌就是漫威了 熱鬧打鬥大場面 人神機甲齊飛 各型超級英雄... \n",
"15 好看 好看 好看 \n",
"16 难看一笔 \n",
"17 第一部精准的节奏 巧妙的悬念和清楚的内心戏不见了 或许导演不想把超级英雄打造成战斗机器 所以... \n",
"18 欧洲竟然真的是最早上映啊 法国比美国还早一周 没怎么看懂的我想找科普说明都不容易 嘛 我... \n",
"19 我是美队的忠实脑 残 粉 "
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# TODO1: 去掉一些无用的字符,自行定一个字符几何,并从文本中去掉\n",
"def pre_process(input_str):\n",
" # input_str = re.sub('[0-9]+', 'DIG', input_str)\n",
" # 去除标点符号\n",
" # input_str = re.sub(r\"[{}]+\".format(punc), \" \", input_str)\n",
" \n",
" input_str = re.sub(\n",
" \"[0-9a-zA-Z\\-\\s+\\.\\!\\/_,$%^*\\(\\)\\+(+\\\"\\')]+|[+——!,。?、~@#¥%……&*()<>\\[\\]::★◆【】《》;;=??]+\", \" \", input_str)\n",
" # 其他非中文字符\n",
" input_str = re.sub(r\"[^\\u4e00-\\u9fff]\", \" \", input_str)\n",
" return input_str.strip()\n",
"\n",
"# 正则去除标点符号\n",
"data['comment_clean'] = data['Comment'].apply(pre_process)\n",
"data.head(20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务2:使用结巴分词对文本做分词"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"apply: 0%| | 0/212506 [00:00<?, ?it/s]Building prefix dict from the default dictionary ...\n",
"Loading model from cache C:\\Users\\avaws\\AppData\\Local\\Temp\\jieba.cache\n",
"Loading model cost 0.630 seconds.\n",
"Prefix dict has been built successfully.\n",
"apply: 100%|█████████████████████████████████████████████████████████████████| 212506/212506 [00:35<00:00, 5973.38it/s]\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Comment</th>\n",
" <th>Star</th>\n",
" <th>comment_clean</th>\n",
" <th>comment_processed</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>1</td>\n",
" <td>连奥创都知道整容要去韩国</td>\n",
" <td>连 奥创 都 知道 整容 要 去 韩国</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>1</td>\n",
" <td>一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只...</td>\n",
" <td>一个 没有 黑暗面 的 人 不 值得 信任 第二部 剥去 冗长 的 铺垫 开...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>0</td>\n",
" <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
" <td>奥创 弱 爆 了 弱 爆 了 弱 爆 了 啊</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>1</td>\n",
" <td>与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ...</td>\n",
" <td>与 第一集 不同 承上启下 阴郁 严肃 但 也 不会 不 好看 啊 除非 本...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>1</td>\n",
" <td>看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅...</td>\n",
" <td>看毕 我 激动 地 对 友人 说 等等 奥创 要 来 毁灭 台北 怎么办 厚 她...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Comment Star \\\n",
"0 连奥创都知道整容要去韩国。 1 \n",
"1 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 1 \n",
"2 奥创弱爆了弱爆了弱爆了啊!!!!!! 0 \n",
"3 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 1 \n",
"4 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 1 \n",
"\n",
" comment_clean \\\n",
"0 连奥创都知道整容要去韩国 \n",
"1 一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只... \n",
"2 奥创弱爆了弱爆了弱爆了啊 \n",
"3 与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ... \n",
"4 看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅... \n",
"\n",
" comment_processed \n",
"0 连 奥创 都 知道 整容 要 去 韩国 \n",
"1 一个 没有 黑暗面 的 人 不 值得 信任 第二部 剥去 冗长 的 铺垫 开... \n",
"2 奥创 弱 爆 了 弱 爆 了 弱 爆 了 啊 \n",
"3 与 第一集 不同 承上启下 阴郁 严肃 但 也 不会 不 好看 啊 除非 本... \n",
"4 看毕 我 激动 地 对 友人 说 等等 奥创 要 来 毁灭 台北 怎么办 厚 她... "
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# TODO2: 导入中文分词包jieba, 并用jieba对原始文本做分词\n",
"import jieba\n",
"def comment_cut(content):\n",
" # TODO: 使用结巴完成对每一个comment的分词\n",
" # 分词并过滤空字符串\n",
" return ' '.join([w for w in jieba.lcut(content.strip()) if len(w) > 0])\n",
"\n",
"# 输出进度条\n",
"tqdm.pandas(desc='apply')\n",
"data['comment_processed'] = data['comment_clean'].progress_apply(comment_cut)\n",
"\n",
"# 观察新的数据的格式\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务3:设定停用词并去掉停用词"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"apply: 100%|█████████████████████████████████████████████████████████████████| 212506/212506 [00:31<00:00, 6837.77it/s]\n"
]
}
],
"source": [
"# TODO3: 设定停用词并从文本中去掉停用词\n",
"\n",
"# 下载中文停用词表至data/stopWord.json中,下载地址:https://github.com/goto456/stopwords/\n",
"if not os.path.exists('./data/stopWord.json'):\n",
" stopWord = requests.get(\"https://raw.githubusercontent.com/goto456/stopwords/master/cn_stopwords.txt\")\n",
" with open(\"./data/stopWord.json\", \"wb\") as f:\n",
" f.write(stopWord.content)\n",
"\n",
"# 读取下载的停用词表,并保存在列表中\n",
"with open(\"./data/stopWord.json\",\"r\", encoding=\"utf-8\") as f:\n",
" stopWords = f.read().split(\"\\n\") \n",
" \n",
" \n",
"# 去除停用词\n",
"def rm_stop_word(input_str):\n",
" # your code, remove stop words\n",
" # TODO\n",
" return [w for w in input_str.split() if w not in stopWords]\n",
"\n",
"#这行代码中.progress_apply()函数的作用等同于.apply()函数的作用,只是写成.progress_apply()函数才能被tqdm包监控从而输出进度条。\n",
"data['comment_processed'] = data['comment_processed'].progress_apply(rm_stop_word)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Comment</th>\n",
" <th>Star</th>\n",
" <th>comment_clean</th>\n",
" <th>comment_processed</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>1</td>\n",
" <td>连奥创都知道整容要去韩国</td>\n",
" <td>[奥创, 知道, 整容, 韩国]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>1</td>\n",
" <td>一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只...</td>\n",
" <td>[一个, 没有, 黑暗面, 值得, 信任, 第二部, 剥去, 冗长, 铺垫, 开场, 高潮,...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>0</td>\n",
" <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
" <td>[奥创, 弱, 爆, 弱, 爆, 弱, 爆]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>1</td>\n",
" <td>与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ...</td>\n",
" <td>[第一集, 不同, 承上启下, 阴郁, 严肃, 不会, 好看, 本来, 喜欢, 漫威, 电影...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>1</td>\n",
" <td>看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅...</td>\n",
" <td>[看毕, 激动, 友人, 说, 奥创, 毁灭, 台北, 厚, 拍了拍, 肩膀, 没事, 反正...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" <td>1</td>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" <td>[绝逼, 质疑, 尾灯, 导演, 编剧, 水平]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>avengers1睡着1次 avengers2睡着两次。。。</td>\n",
" <td>0</td>\n",
" <td>睡着 次 睡着两次</td>\n",
" <td>[睡着, 次, 睡着, 两次]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>谁再喊我看这种电影我和谁急!实在是接受无能。。。</td>\n",
" <td>0</td>\n",
" <td>谁再喊我看这种电影我和谁急 实在是接受无能</td>\n",
" <td>[喊, 这种, 电影, 急, 实在, 接受, 无能]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不...</td>\n",
" <td>1</td>\n",
" <td>超愉悦以及超满足 在历经了第一阶段比漫画更普世的设定融合之后 发展到 居然出现了不少传统...</td>\n",
" <td>[超, 愉悦, 超, 满足, 历经, 第一阶段, 漫画, 更普世, 设定, 融合, 之后, ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上...</td>\n",
" <td>1</td>\n",
" <td>观影过程中 耳边一直有一种突突突突突的声音 我还感慨电影为了让奥创给观众带来紧张感 声音上真...</td>\n",
" <td>[观影, 过程, 中, 耳边, 一直, 一种, 突突突, 突突, 声音, 感慨, 电影, 奥...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>10</th>\n",
" <td>Long takes, no stakes. 最后大战灾难性得乱 olsen到底什么能力完...</td>\n",
" <td>1</td>\n",
" <td>最后大战灾难性得乱 到底什么能力完全没明白 是巴菲里的 其实剧本没那么差 美国例外论的主...</td>\n",
" <td>[最后, 大战, 灾难性, 得乱, 到底, 能力, 完全, 没, 明白, 巴菲, 里, 其实...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>11</th>\n",
" <td>视觉效果的极限是视觉疲劳</td>\n",
" <td>1</td>\n",
" <td>视觉效果的极限是视觉疲劳</td>\n",
" <td>[视觉效果, 极限, 视觉, 疲劳]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>12</th>\n",
" <td>感觉有略黑暗了点,不过还是萌点满满,但是一想到就要完结了又心碎了一地,,,,</td>\n",
" <td>1</td>\n",
" <td>感觉有略黑暗了点 不过还是萌点满满 但是一想到就要完结了又心碎了一地</td>\n",
" <td>[感觉, 有略, 黑暗, 点, 萌点, 满满, 想到, 完结, 心碎, 一地]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>13</th>\n",
" <td>妇联成员都只会讲不好笑的笑话,唯一加分的是朱莉·德培</td>\n",
" <td>0</td>\n",
" <td>妇联成员都只会讲不好笑的笑话 唯一加分的是朱莉 德培</td>\n",
" <td>[妇联, 成员, 只会, 讲, 不好, 笑, 笑话, 唯一, 加分, 朱莉, 德培]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>14</th>\n",
" <td>只算還OK的商業片。現在這類片型第一品牌就是漫威了,熱鬧打鬥大場面,人神機甲齊飛,各型超級...</td>\n",
" <td>1</td>\n",
" <td>只算還 的商業片 現在這類片型第一品牌就是漫威了 熱鬧打鬥大場面 人神機甲齊飛 各型超級英雄...</td>\n",
" <td>[只算還, 商業片, 現在, 這類, 片型, 第一, 品牌, 漫威, 熱鬧, 打鬥大場, 面...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>15</th>\n",
" <td>好看!好看!好看!</td>\n",
" <td>1</td>\n",
" <td>好看 好看 好看</td>\n",
" <td>[好看, 好看, 好看]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>16</th>\n",
" <td>难看一笔</td>\n",
" <td>0</td>\n",
" <td>难看一笔</td>\n",
" <td>[难看, 一笔]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>17</th>\n",
" <td>6/10。第一部精准的节奏、巧妙的悬念和清楚的内心戏不见了,或许导演不想把超级英雄打造成战...</td>\n",
" <td>1</td>\n",
" <td>第一部精准的节奏 巧妙的悬念和清楚的内心戏不见了 或许导演不想把超级英雄打造成战斗机器 所以...</td>\n",
" <td>[第一部, 精准, 节奏, 巧妙, 悬念, 清楚, 内心, 戏, 不见, 或许, 导演, 不...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>欧洲竟然真的是最早上映啊= =法国比美国还早一周……没怎么看懂的我想找科普说明都不容易!嘛...</td>\n",
" <td>1</td>\n",
" <td>欧洲竟然真的是最早上映啊 法国比美国还早一周 没怎么看懂的我想找科普说明都不容易 嘛 我...</td>\n",
" <td>[欧洲, 竟然, 真的, 最早, 上映, 法国, 美国, 早, 一周, 没, 懂, 想, 找...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>我是美队的忠实脑!残!粉!!!!!!!!!</td>\n",
" <td>1</td>\n",
" <td>我是美队的忠实脑 残 粉</td>\n",
" <td>[美队, 忠实, 脑, 残, 粉]</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Comment Star \\\n",
"0 连奥创都知道整容要去韩国。 1 \n",
"1 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 1 \n",
"2 奥创弱爆了弱爆了弱爆了啊!!!!!! 0 \n",
"3 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 1 \n",
"4 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 1 \n",
"5 绝逼不质疑尾灯的导演和编剧水平 1 \n",
"6 avengers1睡着1次 avengers2睡着两次。。。 0 \n",
"7 谁再喊我看这种电影我和谁急!实在是接受无能。。。 0 \n",
"8 超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不... 1 \n",
"9 观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上... 1 \n",
"10 Long takes, no stakes. 最后大战灾难性得乱 olsen到底什么能力完... 1 \n",
"11 视觉效果的极限是视觉疲劳 1 \n",
"12 感觉有略黑暗了点,不过还是萌点满满,但是一想到就要完结了又心碎了一地,,,, 1 \n",
"13 妇联成员都只会讲不好笑的笑话,唯一加分的是朱莉·德培 0 \n",
"14 只算還OK的商業片。現在這類片型第一品牌就是漫威了,熱鬧打鬥大場面,人神機甲齊飛,各型超級... 1 \n",
"15 好看!好看!好看! 1 \n",
"16 难看一笔 0 \n",
"17 6/10。第一部精准的节奏、巧妙的悬念和清楚的内心戏不见了,或许导演不想把超级英雄打造成战... 1 \n",
"18 欧洲竟然真的是最早上映啊= =法国比美国还早一周……没怎么看懂的我想找科普说明都不容易!嘛... 1 \n",
"19 我是美队的忠实脑!残!粉!!!!!!!!! 1 \n",
"\n",
" comment_clean \\\n",
"0 连奥创都知道整容要去韩国 \n",
"1 一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只... \n",
"2 奥创弱爆了弱爆了弱爆了啊 \n",
"3 与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ... \n",
"4 看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅... \n",
"5 绝逼不质疑尾灯的导演和编剧水平 \n",
"6 睡着 次 睡着两次 \n",
"7 谁再喊我看这种电影我和谁急 实在是接受无能 \n",
"8 超愉悦以及超满足 在历经了第一阶段比漫画更普世的设定融合之后 发展到 居然出现了不少传统... \n",
"9 观影过程中 耳边一直有一种突突突突突的声音 我还感慨电影为了让奥创给观众带来紧张感 声音上真... \n",
"10 最后大战灾难性得乱 到底什么能力完全没明白 是巴菲里的 其实剧本没那么差 美国例外论的主... \n",
"11 视觉效果的极限是视觉疲劳 \n",
"12 感觉有略黑暗了点 不过还是萌点满满 但是一想到就要完结了又心碎了一地 \n",
"13 妇联成员都只会讲不好笑的笑话 唯一加分的是朱莉 德培 \n",
"14 只算還 的商業片 現在這類片型第一品牌就是漫威了 熱鬧打鬥大場面 人神機甲齊飛 各型超級英雄... \n",
"15 好看 好看 好看 \n",
"16 难看一笔 \n",
"17 第一部精准的节奏 巧妙的悬念和清楚的内心戏不见了 或许导演不想把超级英雄打造成战斗机器 所以... \n",
"18 欧洲竟然真的是最早上映啊 法国比美国还早一周 没怎么看懂的我想找科普说明都不容易 嘛 我... \n",
"19 我是美队的忠实脑 残 粉 \n",
"\n",
" comment_processed \n",
"0 [奥创, 知道, 整容, 韩国] \n",
"1 [一个, 没有, 黑暗面, 值得, 信任, 第二部, 剥去, 冗长, 铺垫, 开场, 高潮,... \n",
"2 [奥创, 弱, 爆, 弱, 爆, 弱, 爆] \n",
"3 [第一集, 不同, 承上启下, 阴郁, 严肃, 不会, 好看, 本来, 喜欢, 漫威, 电影... \n",
"4 [看毕, 激动, 友人, 说, 奥创, 毁灭, 台北, 厚, 拍了拍, 肩膀, 没事, 反正... \n",
"5 [绝逼, 质疑, 尾灯, 导演, 编剧, 水平] \n",
"6 [睡着, 次, 睡着, 两次] \n",
"7 [喊, 这种, 电影, 急, 实在, 接受, 无能] \n",
"8 [超, 愉悦, 超, 满足, 历经, 第一阶段, 漫画, 更普世, 设定, 融合, 之后, ... \n",
"9 [观影, 过程, 中, 耳边, 一直, 一种, 突突突, 突突, 声音, 感慨, 电影, 奥... \n",
"10 [最后, 大战, 灾难性, 得乱, 到底, 能力, 完全, 没, 明白, 巴菲, 里, 其实... \n",
"11 [视觉效果, 极限, 视觉, 疲劳] \n",
"12 [感觉, 有略, 黑暗, 点, 萌点, 满满, 想到, 完结, 心碎, 一地] \n",
"13 [妇联, 成员, 只会, 讲, 不好, 笑, 笑话, 唯一, 加分, 朱莉, 德培] \n",
"14 [只算還, 商業片, 現在, 這類, 片型, 第一, 品牌, 漫威, 熱鬧, 打鬥大場, 面... \n",
"15 [好看, 好看, 好看] \n",
"16 [难看, 一笔] \n",
"17 [第一部, 精准, 节奏, 巧妙, 悬念, 清楚, 内心, 戏, 不见, 或许, 导演, 不... \n",
"18 [欧洲, 竟然, 真的, 最早, 上映, 法国, 美国, 早, 一周, 没, 懂, 想, 找... \n",
"19 [美队, 忠实, 脑, 残, 粉] "
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.head(20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务4:去掉低频词,出现次数少于10次的词去掉"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"apply: 100%|███████████████████████████████████████████████████████████████| 212506/212506 [00:00<00:00, 227439.46it/s]\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Comment</th>\n",
" <th>Star</th>\n",
" <th>comment_clean</th>\n",
" <th>comment_processed</th>\n",
" <th>comment_processed_str</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>连奥创都知道整容要去韩国。</td>\n",
" <td>1</td>\n",
" <td>连奥创都知道整容要去韩国</td>\n",
" <td>[奥创, 知道, 整容, 韩国]</td>\n",
" <td>奥创 知道 整容 韩国</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉...</td>\n",
" <td>1</td>\n",
" <td>一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只...</td>\n",
" <td>[一个, 没有, 黑暗面, 值得, 信任, 第二部, 冗长, 铺垫, 开场, 高潮, 一直,...</td>\n",
" <td>一个 没有 黑暗面 值得 信任 第二部 冗长 铺垫 开场 高潮 一直 结束 会 有人 觉得 ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>奥创弱爆了弱爆了弱爆了啊!!!!!!</td>\n",
" <td>0</td>\n",
" <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
" <td>[奥创, 弱, 爆, 弱, 爆, 弱, 爆]</td>\n",
" <td>奥创 弱 爆 弱 爆 弱 爆</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
" <td>1</td>\n",
" <td>与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ...</td>\n",
" <td>[第一集, 不同, 承上启下, 阴郁, 严肃, 不会, 好看, 本来, 喜欢, 漫威, 电影...</td>\n",
" <td>第一集 不同 承上启下 阴郁 严肃 不会 好看 本来 喜欢 漫威 电影 场面 更加 宏大 团...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份...</td>\n",
" <td>1</td>\n",
" <td>看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅...</td>\n",
" <td>[激动, 友人, 说, 奥创, 毁灭, 台北, 厚, 肩膀, 没事, 反正, 买, 两份, ...</td>\n",
" <td>激动 友人 说 奥创 毁灭 台北 厚 肩膀 没事 反正 买 两份 旅行 惹</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" <td>1</td>\n",
" <td>绝逼不质疑尾灯的导演和编剧水平</td>\n",
" <td>[绝逼, 质疑, 尾灯, 导演, 编剧, 水平]</td>\n",
" <td>绝逼 质疑 尾灯 导演 编剧 水平</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>avengers1睡着1次 avengers2睡着两次。。。</td>\n",
" <td>0</td>\n",
" <td>睡着 次 睡着两次</td>\n",
" <td>[睡着, 次, 睡着, 两次]</td>\n",
" <td>睡着 次 睡着 两次</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>谁再喊我看这种电影我和谁急!实在是接受无能。。。</td>\n",
" <td>0</td>\n",
" <td>谁再喊我看这种电影我和谁急 实在是接受无能</td>\n",
" <td>[喊, 这种, 电影, 急, 实在, 接受, 无能]</td>\n",
" <td>喊 这种 电影 急 实在 接受 无能</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不...</td>\n",
" <td>1</td>\n",
" <td>超愉悦以及超满足 在历经了第一阶段比漫画更普世的设定融合之后 发展到 居然出现了不少传统...</td>\n",
" <td>[超, 愉悦, 超, 满足, 历经, 漫画, 设定, 融合, 之后, 发展, 居然, 出现,...</td>\n",
" <td>超 愉悦 超 满足 历经 漫画 设定 融合 之后 发展 居然 出现 不少 传统 科幻 尾灯 ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上...</td>\n",
" <td>1</td>\n",
" <td>观影过程中 耳边一直有一种突突突突突的声音 我还感慨电影为了让奥创给观众带来紧张感 声音上真...</td>\n",
" <td>[观影, 过程, 中, 耳边, 一直, 一种, 突突突, 声音, 感慨, 电影, 奥创, 观...</td>\n",
" <td>观影 过程 中 耳边 一直 一种 突突突 声音 感慨 电影 奥创 观众 带来 紧张感 声音 ...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" Comment Star \\\n",
"0 连奥创都知道整容要去韩国。 1 \n",
"1 “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫,开场即高潮、一直到结束,会有人觉... 1 \n",
"2 奥创弱爆了弱爆了弱爆了啊!!!!!! 0 \n",
"3 与第一集不同,承上启下,阴郁严肃,但也不会不好看啊,除非本来就不喜欢漫威电影。场面更加宏大... 1 \n",
"4 看毕,我激动地对友人说,等等奥创要来毁灭台北怎么办厚,她拍了拍我肩膀,没事,反正你买了两份... 1 \n",
"5 绝逼不质疑尾灯的导演和编剧水平 1 \n",
"6 avengers1睡着1次 avengers2睡着两次。。。 0 \n",
"7 谁再喊我看这种电影我和谁急!实在是接受无能。。。 0 \n",
"8 超愉悦以及超满足。在历经了第一阶段比漫画更普世的设定融合之后,发展到#AoU#居然出现了不... 1 \n",
"9 观影过程中,耳边一直有一种突突突突突的声音,我还感慨电影为了让奥创给观众带来紧张感,声音上... 1 \n",
"\n",
" comment_clean \\\n",
"0 连奥创都知道整容要去韩国 \n",
"1 一个没有黑暗面的人不值得信任 第二部剥去冗长的铺垫 开场即高潮 一直到结束 会有人觉得只... \n",
"2 奥创弱爆了弱爆了弱爆了啊 \n",
"3 与第一集不同 承上启下 阴郁严肃 但也不会不好看啊 除非本来就不喜欢漫威电影 场面更加宏大 ... \n",
"4 看毕 我激动地对友人说 等等奥创要来毁灭台北怎么办厚 她拍了拍我肩膀 没事 反正你买了两份旅... \n",
"5 绝逼不质疑尾灯的导演和编剧水平 \n",
"6 睡着 次 睡着两次 \n",
"7 谁再喊我看这种电影我和谁急 实在是接受无能 \n",
"8 超愉悦以及超满足 在历经了第一阶段比漫画更普世的设定融合之后 发展到 居然出现了不少传统... \n",
"9 观影过程中 耳边一直有一种突突突突突的声音 我还感慨电影为了让奥创给观众带来紧张感 声音上真... \n",
"\n",
" comment_processed \\\n",
"0 [奥创, 知道, 整容, 韩国] \n",
"1 [一个, 没有, 黑暗面, 值得, 信任, 第二部, 冗长, 铺垫, 开场, 高潮, 一直,... \n",
"2 [奥创, 弱, 爆, 弱, 爆, 弱, 爆] \n",
"3 [第一集, 不同, 承上启下, 阴郁, 严肃, 不会, 好看, 本来, 喜欢, 漫威, 电影... \n",
"4 [激动, 友人, 说, 奥创, 毁灭, 台北, 厚, 肩膀, 没事, 反正, 买, 两份, ... \n",
"5 [绝逼, 质疑, 尾灯, 导演, 编剧, 水平] \n",
"6 [睡着, 次, 睡着, 两次] \n",
"7 [喊, 这种, 电影, 急, 实在, 接受, 无能] \n",
"8 [超, 愉悦, 超, 满足, 历经, 漫画, 设定, 融合, 之后, 发展, 居然, 出现,... \n",
"9 [观影, 过程, 中, 耳边, 一直, 一种, 突突突, 声音, 感慨, 电影, 奥创, 观... \n",
"\n",
" comment_processed_str \n",
"0 奥创 知道 整容 韩国 \n",
"1 一个 没有 黑暗面 值得 信任 第二部 冗长 铺垫 开场 高潮 一直 结束 会 有人 觉得 ... \n",
"2 奥创 弱 爆 弱 爆 弱 爆 \n",
"3 第一集 不同 承上启下 阴郁 严肃 不会 好看 本来 喜欢 漫威 电影 场面 更加 宏大 团... \n",
"4 激动 友人 说 奥创 毁灭 台北 厚 肩膀 没事 反正 买 两份 旅行 惹 \n",
"5 绝逼 质疑 尾灯 导演 编剧 水平 \n",
"6 睡着 次 睡着 两次 \n",
"7 喊 这种 电影 急 实在 接受 无能 \n",
"8 超 愉悦 超 满足 历经 漫画 设定 融合 之后 发展 居然 出现 不少 传统 科幻 尾灯 ... \n",
"9 观影 过程 中 耳边 一直 一种 突突突 声音 感慨 电影 奥创 观众 带来 紧张感 声音 ... "
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# TODO4: 去除低频词, 去掉词频小于10的单词,并把结果存放在data['comment_processed']里\n",
"word_counter = Counter([w for s in data['comment_processed'].values for w in s])\n",
"\n",
"\n",
"def rm_low_frequency_words(word_list):\n",
" return [w for w in word_list if word_counter[w] >= 10]\n",
"\n",
"data['comment_processed'] = data['comment_processed'].progress_apply(rm_low_frequency_words)\n",
"data['comment_processed_str'] = data['comment_processed'].apply(lambda x: ' '.join(x))\n",
"data.head(10)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. 把文本分为训练集和测试集\n",
"选择语料库中的20%作为测试数据,剩下的作为训练数据"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# TODO5: 把数据分为训练集和测试集. comments_train(list)保存用于训练的文本,comments_test(list)保存用于测试的文本。 y_train, y_test是对应的标签(0、1)\n",
"\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"test_ratio = 0.2\n",
"\n",
"# https://machinelearningmastery.com/train-test-split-for-evaluating-machine-learning-algorithms/\n",
"src_training, src_testing = train_test_split(data, test_size=test_ratio, stratify=data['Star'])\n",
"\n",
"comments_train, comments_test = src_training['comment_processed_str'].values, src_testing['comment_processed_str'].values\n",
"y_train, y_test = src_training['Star'].values, src_testing['Star'].values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. 把文本转换成向量的形式\n",
"\n",
"在这个部分我们会采用三种不同的方式:\n",
"- 使用tf-idf向量\n",
"- 使用word2vec\n",
"- 使用bert向量\n",
"\n",
"转换成向量之后,我们接着做模型的训练"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务6:把文本转换成tf-idf向量"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(170004, 15763) (42502, 15763)\n"
]
}
],
"source": [
"# TODO6: 把训练文本和测试文本转换成tf-idf向量。使用sklearn的feature_extraction.text.TfidfTransformer模块\n",
"# 请留意fit_transform和transform之间的区别。 常见的错误是在训练集和测试集上都使用 fit_transform,需要避免! \n",
"# 另外,可以留意一下结果是否为稀疏矩阵\n",
"\n",
"from sklearn.feature_extraction.text import CountVectorizer\n",
"from sklearn.feature_extraction.text import TfidfTransformer\n",
"\n",
"count_vectorizer = CountVectorizer(token_pattern=r\"(?u)\\b\\w+\\b\")\n",
"tfidf_transformer = TfidfTransformer()\n",
"\n",
"word_count_train = count_vectorizer.fit_transform(comments_train)\n",
"tfidf_train = tfidf_transformer.fit_transform(word_count_train)\n",
"\n",
"word_count_test = count_vectorizer.transform(comments_test)\n",
"tfidf_test = tfidf_transformer.transform(word_count_test)\n",
"\n",
"print(tfidf_train.shape, tfidf_test.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务7:把文本转换成word2vec向量"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"# 由于训练出一个高效的word2vec词向量往往需要非常大的语料库与计算资源,所以我们通常不自己训练Wordvec词向量,而直接使用网上开源的已训练好的词向量。\n",
"# data/sgns.zhihu.word是从https://github.com/Embedding/Chinese-Word-Vectors下载到的预训练好的中文词向量文件\n",
"# 使用KeyedVectors.load_word2vec_format()函数加载预训练好的词向量文件\n",
"model = KeyedVectors.load_word2vec_format('data/sgns.zhihu.word')"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([-0.200708, 0.188213, -0.20941 , 0.048857, 0.116663, 0.547244,\n",
" -0.449441, -0.177554, 0.123547, 0.161301, -0.20861 , 0.429821,\n",
" -0.429595, -0.45094 , 0.190053, 0.175438, 0.066855, -0.157346,\n",
" 0.134905, -0.128076, 0.111503, -0.03149 , -0.347445, -0.231517,\n",
" 0.212383, 0.29857 , 0.167368, -0.064022, -0.048241, 0.109434,\n",
" -0.156835, -0.558394, -0.005307, 0.127788, -0.053521, -0.154787,\n",
" -0.048875, 0.109031, 0.160019, 0.273365, -0.023131, -0.257962,\n",
" -0.051904, 0.103058, 0.019103, 0.210418, -0.12053 , 0.084021,\n",
" 0.085243, -0.406479, -0.285062, -0.229883, -0.125173, -0.141597,\n",
" -0.018101, -0.215311, -0.091788, 0.315358, 0.242912, 0.013785,\n",
" -0.078914, 0.158206, 0.180421, -0.050306, -0.008539, -0.201157,\n",
" 0.047753, 0.293518, 0.340344, 0.098132, 0.356952, 0.189959,\n",
" -0.107122, -0.176698, 0.011044, 0.131703, 0.134601, -0.078891,\n",
" 0.217989, 0.05074 , 0.063365, 0.30178 , 0.161369, 0.157998,\n",
" -0.128195, -0.060345, 0.047446, -0.146161, 0.005427, -0.06684 ,\n",
" 0.056229, -0.04922 , -0.122368, 0.181634, 0.180599, 0.026725,\n",
" -0.383503, -0.10855 , 0.06524 , -0.095767, 0.08362 , 0.287755,\n",
" -0.325982, -0.026982, 0.147817, 0.041374, 0.342181, -0.010403,\n",
" -0.082642, 0.124128, -0.104747, 0.002654, -0.086981, -0.044065,\n",
" -0.085694, -0.020068, -0.125195, -0.154542, -0.030115, 0.100488,\n",
" 0.081022, 0.06612 , 0.088058, -0.102289, -0.061927, -0.054882,\n",
" 0.510755, -0.154545, 0.029478, -0.191885, -0.048633, -0.218267,\n",
" -0.14659 , -0.028195, 0.223698, 0.101008, 0.100562, -0.237451,\n",
" 0.492519, -0.163208, -0.466598, 0.041121, 0.153394, 0.066931,\n",
" 0.428429, 0.238117, 0.188347, 0.290581, 0.147405, -0.222624,\n",
" 0.336171, -0.128802, 0.032038, 0.036617, 0.042459, 0.031089,\n",
" 0.092689, 0.092509, -0.206014, -0.093757, -0.079919, 0.052213,\n",
" 0.176261, 0.030587, -0.222407, -0.293368, -0.210982, 0.086169,\n",
" -0.41054 , 0.168664, -0.110555, 0.104398, 0.131111, 0.034967,\n",
" -0.240558, 0.050963, 0.002297, -0.231932, 0.138751, -0.162152,\n",
" 0.128286, 0.11232 , 0.085235, 0.16869 , 0.072754, 0.004705,\n",
" -0.175828, -0.082598, -0.245999, 0.103419, 0.357173, -0.05588 ,\n",
" 0.030934, -0.13984 , 0.011164, -0.277783, -0.168691, -0.223155,\n",
" -0.203391, -0.015567, 0.161146, -0.110572, -0.06779 , -0.006586,\n",
" -0.039414, 0.245169, -0.182014, 0.38548 , 0.039947, 0.36978 ,\n",
" 0.167039, -0.055724, 0.051462, 0.044205, -0.255853, -0.194969,\n",
" -0.215543, 0.367193, -0.268322, 0.048425, 0.181398, 0.203609,\n",
" 0.04321 , -0.280908, 0.215055, -0.410717, 0.209178, 0.365696,\n",
" -0.26421 , 0.008008, -0.167048, 0.07082 , 0.148507, -0.121757,\n",
" -0.227046, -0.161108, -0.084349, 0.173502, 0.07519 , -0.203567,\n",
" 0.151776, -0.21104 , -0.334659, 0.090743, 0.049097, 0.080783,\n",
" -0.062416, -0.089825, 0.230757, -0.065472, 0.313976, 0.096314,\n",
" -0.145926, 0.146772, -0.007169, -0.041627, -0.050497, -0.34267 ,\n",
" -0.144144, -0.140267, 0.000677, -0.114036, -0.017044, -0.030107,\n",
" -0.098467, -0.233114, 0.103173, 0.093112, -0.11863 , 0.086859,\n",
" 0.300346, 0.146062, -0.173922, 0.162061, 0.143895, -0.158726,\n",
" -0.123311, 0.166061, -0.196121, 0.207249, 0.053585, 0.025314,\n",
" -0.24309 , -0.074694, -0.238774, -0.056441, -0.099747, -0.271508,\n",
" 0.212461, 0.189918, 0.162701, -0.154819, 0.235821, -0.131372,\n",
" -0.052284, 0.101817, 0.088172, 0.107883, 0.020072, 0.188443],\n",
" dtype=float32)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#预训练词向量使用举例\n",
"model['我们']"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Word is: 太\n",
"in vocab with first dimension: 0.014244000427424908 and first_dim_sum is: 0.014244000427424908\n",
"Word is: 羡慕\n",
"in vocab with first dimension: -0.15045799314975739 and first_dim_sum is: -0.13621399272233248\n",
"Word is: 睡醒\n",
"in vocab with first dimension: -0.24956999719142914 and first_dim_sum is: -0.3857839899137616\n",
"Word is: 自带\n",
"in vocab with first dimension: -0.011017999611794949 and first_dim_sum is: -0.39680198952555656\n",
"Word is: 眼线\n",
"in vocab with first dimension: 0.5926219820976257 and first_dim_sum is: 0.19581999257206917\n",
"Word is: 功能\n",
"in vocab with first dimension: -0.23343400657176971 and first_dim_sum is: -0.037614013999700546\n",
"word_num is: 6, first dim average is: -0.006269002333283424\n"
]
},
{
"data": {
"text/plain": [
"array([-0.006269 , 0.49181566, -0.03168183, 0.04983784, 0.06673867,\n",
" 0.335808 , -0.15324783, 0.13205217, -0.187487 , -0.06973083,\n",
" -0.33892885, 0.00712617, -0.04528883, -0.35241985, -0.25597516,\n",
" 0.17179199, -0.15410483, 0.25453883, 0.04631083, 0.00468217,\n",
" 0.05664301, 0.208089 , -0.14274767, -0.02742232, -0.03486684,\n",
" 0.20949216, 0.06042317, -0.24423634, 0.04138667, 0.21450649,\n",
" -0.19060284, -0.18335716, 0.02216783, 0.16279984, 0.48065415,\n",
" 0.181997 , 0.08256483, -0.0141175 , 0.0303005 , -0.10158599,\n",
" -0.13615434, -0.05390099, 0.0034065 , 0.04335551, 0.13684501,\n",
" 0.04543083, 0.001553 , -0.20554483, -0.22932017, -0.2425495 ,\n",
" -0.07273617, -0.22611235, -0.20762551, -0.00542116, -0.03923317,\n",
" 0.01394333, 0.06697384, 0.07175667, 0.01097817, 0.10089815,\n",
" 0.19776817, -0.23033567, -0.00759733, -0.003969 , 0.16955216,\n",
" 0.04057483, 0.13872884, 0.13213433, 0.03250884, -0.0223265 ,\n",
" 0.40901634, 0.11547834, -0.28505036, 0.014259 , 0.15724249,\n",
" 0.04512767, -0.11240651, 0.0085975 , -0.09461749, -0.15875368,\n",
" -0.42937502, 0.1967647 , -0.167874 , -0.13948801, -0.0713935 ,\n",
" -0.24380215, 0.0166915 , -0.10906667, 0.11994666, 0.14009634,\n",
" 0.07170583, 0.20213133, -0.06602767, 0.10403249, 0.09893334,\n",
" 0.11362684, -0.15490116, -0.0008545 , 0.00886033, -0.20169084,\n",
" -0.06920484, 0.340556 , 0.06646351, -0.10229701, 0.21381085,\n",
" -0.12904866, 0.09576433, -0.04434667, -0.01670217, -0.0874575 ,\n",
" 0.04633834, -0.09038868, 0.21476734, 0.06342583, 0.0866585 ,\n",
" 0.01539184, -0.24206935, 0.0392975 , -0.10134933, -0.06606951,\n",
" 0.280585 , -0.05947768, 0.24122365, -0.06553616, 0.16132265,\n",
" 0.17330217, 0.12740934, -0.00664983, -0.08104451, 0.20668782,\n",
" 0.121468 , 0.08478551, -0.03720683, 0.02036533, 0.070995 ,\n",
" -0.21309084, 0.26116082, -0.37298015, 0.13490233, 0.11869783,\n",
" 0.0371005 , -0.10614616, 0.18191002, 0.12976883, 0.2616267 ,\n",
" -0.12406898, -0.16117118, 0.12877066, -0.1553865 , 0.10904866,\n",
" -0.09247132, 0.12956016, 0.13495584, 0.09278933, -0.29651916,\n",
" 0.20360716, 0.099833 , -0.01783117, -0.06679617, 0.01231216,\n",
" 0.188987 , -0.0117375 , -0.08387617, -0.03082917, -0.11206567,\n",
" -0.13265468, -0.04277933, 0.27157086, -0.18449984, 0.27238747,\n",
" -0.07431716, -0.05397001, 0.02042533, -0.19138734, 0.0134245 ,\n",
" -0.08939617, 0.1561025 , 0.06543782, 0.046698 , 0.02843284,\n",
" 0.11977883, 0.15135516, -0.08608434, 0.09061084, -0.1356525 ,\n",
" -0.03282601, 0.04795817, -0.04158567, -0.04836716, -0.09820333,\n",
" 0.3472228 , -0.12267167, 0.03923183, -0.18316533, 0.11225 ,\n",
" -0.17679133, -0.0680245 , 0.14407717, 0.107478 , 0.01064617,\n",
" -0.1513785 , -0.2381355 , 0.01563917, 0.10676351, -0.09361733,\n",
" 0.13431932, 0.12316317, 0.16359682, -0.13519734, -0.09500366,\n",
" 0.14091365, 0.14162616, 0.03102633, 0.10469216, -0.17308533,\n",
" -0.39692616, 0.1269405 , 0.169883 , -0.1719905 , -0.07911801,\n",
" -0.10372651, 0.13604984, -0.05316551, -0.4384433 , -0.06366532,\n",
" -0.12293318, 0.2317065 , 0.28796482, 0.26968583, 0.16583967,\n",
" 0.05252817, 0.15405333, 0.06897517, -0.15944766, -0.105216 ,\n",
" -0.025306 , 0.30037683, 0.30300233, 0.06399766, 0.10287201,\n",
" 0.0032095 , -0.1588355 , 0.04603817, 0.01326967, -0.04889333,\n",
" -0.0645775 , 0.00328116, 0.12878667, 0.0882395 , -0.09495318,\n",
" 0.34225833, 0.07591516, 0.26686716, 0.122855 , -0.00167983,\n",
" -0.24565983, -0.22329152, -0.09808967, 0.00235601, -0.07280833,\n",
" 0.01587083, -0.12102517, -0.16746451, 0.0295955 , 0.1357425 ,\n",
" -0.19811799, 0.06344417, 0.03116834, -0.15141399, -0.05643583,\n",
" -0.10892133, -0.05933183, -0.07419883, -0.1634795 , -0.12360232,\n",
" 0.04137383, 0.09551833, 0.08709151, -0.27779886, 0.06205567,\n",
" 0.07474118, -0.166826 , 0.0069045 , -0.05632783, 0.15675949,\n",
" 0.197645 , 0.26591933, -0.008814 , 0.03002167, 0.056109 ,\n",
" -0.02320532, 0.15731184, -0.06499717, -0.255752 , -0.19852583,\n",
" 0.17784333, -0.0243395 , 0.01665667, 0.2156315 , -0.10841651],\n",
" dtype=float32)"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"vocabulary = model.vocab\n",
"\n",
"def word_vec_averaging(words, dim=300):\n",
" \"\"\"\n",
" Average all words vectors in one sentence.\n",
" :param words: input sentence\n",
" :param dim: 'size' of model\n",
" :return: the averaged word vectors as the vector for the sentence\n",
" \"\"\"\n",
" vec_mean = np.zeros((dim,), dtype=np.float32)\n",
" word_num = 0\n",
" first_dim_sum = 0\n",
" for word in words:\n",
" print(f'Word is: {word}')\n",
" if word in vocabulary:\n",
" word_num += 1\n",
" vec_mean = np.add(vec_mean, model[word])\n",
" first_dim_sum += model[word][0]\n",
" print(f'in vocab with first dimension: {model[word][0]} and first_dim_sum is: {first_dim_sum}')\n",
" else:\n",
" print('not in vocab')\n",
" if word_num > 0:\n",
" vec_mean = np.divide(vec_mean, word_num)\n",
" print(f'word_num is: {word_num}, first dim average is: {first_dim_sum / word_num}')\n",
" return vec_mean\n",
"\n",
"one_sample = comments_train[100]\n",
"word_vec_averaging(one_sample.split())"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'三星 韩寒 朴树 片儿 到底 情况 影评 过来 一星'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"comments_train[3]"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(170004, 300) (42502, 300)\n"
]
}
],
"source": [
"vocabulary = model.vocab\n",
"\n",
"def word_vec_averaging(words, dim=300):\n",
" \"\"\"\n",
" Average all words vectors in one sentence.\n",
" :param words: input sentence\n",
" :param dim: 'size' of model\n",
" :return: the averaged word vectors as the vector for the sentence\n",
" \"\"\"\n",
" vec_mean = np.zeros((dim,), dtype=np.float32)\n",
" word_num = 0\n",
" first_dim_sum = 0\n",
" for word in words:\n",
" if word in vocabulary:\n",
" word_num += 1\n",
" vec_mean = np.add(vec_mean, model[word])\n",
" first_dim_sum += model[word][0]\n",
" if word_num > 0:\n",
" vec_mean = np.divide(vec_mean, word_num)\n",
" return vec_mean\n",
"\n",
"word2vec_train = np.array([word_vec_averaging(s.split()) for s in comments_train])\n",
"word2vec_test = np.array([word_vec_averaging(s.split()) for s in comments_test])\n",
"print(word2vec_train.shape, word2vec_test.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务8:把文本转换成bert向量"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(34, 768) (8, 768)\n"
]
}
],
"source": [
"# 导入gpu版本的bert embedding预训练的模型。\n",
"# 若没有gpu,则ctx可使用其默认值cpu(0)。但使用cpu会使程序运行的时间变得非常慢\n",
"# 若之前没有下载过bert embedding预训练的模型,执行此句时会花费一些时间来下载预训练的模型\n",
"ctx = mxnet.cpu()\n",
"embedding = BertEmbedding(ctx=ctx)\n",
"\n",
"# TODO8: 跟word2vec一样,计算出训练文本和测试文本的向量,仍然采用单词向量的平均。\n",
"def bert_embedding_averaging(sentence):\n",
" \"\"\"返回sentence bert 句向量\"\"\"\n",
" tokens, token_embeddings = embedding([sentence])[0]\n",
" return np.mean(np.array(token_embeddings), axis=0).astype(np.float32)\n",
"bert_train = np.array([bert_embedding_averaging(s) for s in comments_train[:len(comments_train)//K]])\n",
"bert_test = np.array([bert_embedding_averaging(s) for s in comments_test[:len(comments_test)//K]])\n",
"print (bert_train.shape, bert_test.shape)"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(170004, 15763) (42502, 15763)\n",
"(170004, 300) (42502, 300)\n",
"(34, 768) (8, 768)\n"
]
}
],
"source": [
"print (tfidf_train.shape, tfidf_test.shape)\n",
"print (word2vec_train.shape, word2vec_test.shape)\n",
"print (bert_train.shape, bert_test.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. 训练模型以及评估\n",
"对如上三种不同的向量表示法,分别训练逻辑回归模型,需要做:\n",
"- 搭建模型\n",
"- 训练模型(并做交叉验证)\n",
"- 输出最好的结果"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"# 导入逻辑回归的包\n",
"from sklearn.linear_model import LogisticRegression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务9:使用tf-idf,并结合逻辑回归训练模型"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Best parameters: {'C': 1, 'class_weight': None}\n",
"TF-IDF LR test accuracy 0.8766175709378382\n",
"TF-IDF LR test F1_score 0.7365217068731438\n"
]
}
],
"source": [
"# TODO9: 使用tf-idf + 逻辑回归训练模型,需要用gridsearchCV做交叉验证,并选择最好的超参数\n",
"clf = LogisticRegression()\n",
"\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"search_grid = {\n",
" 'C': [0.01, 1, 10, 100],\n",
" 'class_weight': [None, 'balanced']\n",
"}\n",
"\n",
"grid_search = GridSearchCV(estimator = clf, \n",
" param_grid = search_grid, \n",
" cv = 55, \n",
" n_jobs=-1, \n",
" scoring='accuracy')\n",
"\n",
"grid_result = grid_search.fit(tfidf_train, y_train)\n",
"print(f'Best parameters: {grid_result.best_params_}')\n",
"\n",
"clf.fit(tfidf_train, y_train)\n",
"tf_idf_y_pred = clf.predict(tfidf_test)\n",
"print('TF-IDF LR test accuracy %s' % metrics.accuracy_score(y_test, tf_idf_y_pred))\n",
"#逻辑回归模型在测试集上的F1_Score\n",
"print('TF-IDF LR test F1_score %s' % metrics.f1_score(y_test, tf_idf_y_pred,average=\"macro\"))"
]
},
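{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 补充示例:使用tf-idf + 朴素贝叶斯训练模型(示意)\n",
"项目说明中还要求训练朴素贝叶斯模型,但上文只给出了逻辑回归。下面是一个基于前面已生成的tfidf_train/tfidf_test特征的MultinomialNB最小示例,流程与逻辑回归一致,仅作参考,未计入上面的结果。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 示例:tf-idf + 多项式朴素贝叶斯(假设tfidf_train、tfidf_test、y_train、y_test已按前文生成)\n",
"from sklearn.naive_bayes import MultinomialNB\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"nb_grid = {'alpha': [0.1, 0.5, 1.0]}\n",
"nb_search = GridSearchCV(MultinomialNB(), nb_grid, cv=5, n_jobs=-1, scoring='accuracy')\n",
"nb_search.fit(tfidf_train, y_train)\n",
"print(f'Best parameters: {nb_search.best_params_}')\n",
"\n",
"# 直接使用交叉验证选出的最优模型做预测\n",
"nb_y_pred = nb_search.best_estimator_.predict(tfidf_test)\n",
"print('TF-IDF NB test accuracy %s' % metrics.accuracy_score(y_test, nb_y_pred))\n",
"print('TF-IDF NB test F1_score %s' % metrics.f1_score(y_test, nb_y_pred, average=\"macro\"))"
]
},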
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务10:使用word2vec,并结合逻辑回归训练模型"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Best parameters: {'C': 100, 'class_weight': None}\n",
"Word2vec LR test accuracy 0.848618888522893\n",
"Word2vec LR test F1_score 0.6377919282595982\n"
]
}
],
"source": [
"# TODO10: 使用word2vec + 逻辑回归训练模型,需要用gridsearchCV做交叉验证,并选择最好的超参数\n",
"clf = LogisticRegression()\n",
"\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"search_grid = {\n",
" 'C': [0.01, 1, 10, 100],\n",
" 'class_weight': [None, 'balanced']\n",
"}\n",
"\n",
"grid_search = GridSearchCV(estimator = clf, \n",
" param_grid = search_grid, \n",
" cv = 5, \n",
" n_jobs=-1, \n",
" scoring='accuracy')\n",
"\n",
"grid_result = grid_search.fit(word2vec_train, y_train)\n",
"print(f'Best parameters: {grid_result.best_params_}')\n",
"clf.fit(word2vec_train, y_train)\n",
"word2vec_y_pred = clf.predict(word2vec_test)\n",
"print('Word2vec LR test accuracy %s' % metrics.accuracy_score(y_test, word2vec_y_pred))\n",
"#逻辑回归模型在测试集上的F1_Score\n",
"print('Word2vec LR test F1_score %s' % metrics.f1_score(y_test, word2vec_y_pred,average=\"macro\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务11:使用bert,并结合逻辑回归训练模型"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Best parameters: {'C': 0.01, 'class_weight': None}\n",
"Bert LR test accuracy 0.625\n",
"Bert LR test F1_score 0.38461538461538464\n"
]
}
],
"source": [
"# TODO11: 使用bert + 逻辑回归训练模型,需要用gridsearchCV做交叉验证,并选择最好的超参数\n",
"clf = LogisticRegression()\n",
"\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"search_grid = {\n",
" 'C': [0.01, 1, 10, 100],\n",
" 'class_weight': [None, 'balanced']\n",
"}\n",
"\n",
"grid_search = GridSearchCV(estimator = clf, \n",
" param_grid = search_grid, \n",
" cv = 5, \n",
" n_jobs=-1, \n",
" scoring='accuracy')\n",
"\n",
"grid_result = grid_search.fit(bert_train, y_train[:len(y_train)//K])\n",
"print(f'Best parameters: {grid_result.best_params_}')\n",
"clf.fit(bert_train, y_train[:len(y_train)//K])\n",
"bert_y_pred = lr.predict(bert_test)\n",
"print('Bert LR test accuracy %s' % metrics.accuracy_score(y_test[:len(y_test)//K], bert_y_pred))\n",
"#逻辑回归模型在测试集上的F1_Score\n",
"print('Bert LR test F1_score %s' % metrics.f1_score(y_test[:len(y_test)//K], bert_y_pred,average=\"macro\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 任务12:对于以上结果请做一下简单的总结,按照1,2,3,4提取几个关键点,包括:\n",
"- 结果说明什么问题?\n",
"- 接下来如何提高?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1.预训练的词向量和bert模型的好坏对最终结果产生直接影响。如果预训练语料和下游任务不匹配,效果可能会很差。\n",
"2.词向量的维度对结果也会产生影响,但是维度并不是越大越好。维度越大,可能会导致噪声变大。\n",
"3.本次作业使用的库都比较老,导致在我自己机器声运行是无法使用GPU加速。所以训练bert向量时值用了一小部分数据。\n",
"4.对于bert来说使用finetune方法会有所提升。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}