{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## 搭建一个分词工具\n", "\n", "### Part 1 基于枚举方法来搭建中文分词工具\n", "\n", "此项目需要的数据:\n", "1. 综合类中文词库.xlsx: 包含了中文词,当做词典来用\n", "2. 以变量的方式提供了部分unigram概率 word_prob\n", "\n", "\n", "举个例子: 给定词典=[我们 学习 人工 智能 人工智能 未来 是], 另外我们给定unigram概率:p(我们)=0.25, p(学习)=0.15, p(人工)=0.05, p(智能)=0.1, p(人工智能)=0.2, p(未来)=0.1, p(是)=0.15\n", "\n", "#### Step 1: 对于给定字符串:”我们学习人工智能,人工智能是未来“, 找出所有可能的分割方式\n", "- [我们,学习,人工智能,人工智能,是,未来]\n", "- [我们,学习,人工,智能,人工智能,是,未来]\n", "- [我们,学习,人工,智能,人工,智能,是,未来]\n", "- [我们,学习,人工智能,人工,智能,是,未来]\n", ".......\n", "\n", "\n", "#### Step 2: 我们也可以计算出每一个切分之后句子的概率\n", "- p(我们,学习,人工智能,人工智能,是,未来)= -log p(我们)-log p(学习)-log p(人工智能)-log p(人工智能)-log p(是)-log p(未来)\n", "- p(我们,学习,人工,智能,人工智能,是,未来)=-log p(我们)-log p(学习)-log p(人工)-log p(智能)-log p(人工智能)-log p(是)-log p(未来)\n", "- p(我们,学习,人工,智能,人工,智能,是,未来)=-log p(我们)-log p(学习)-log p(人工)-log p(智能)-log p(人工)-log p(智能)-log p(是)-log p(未来)\n", "- p(我们,学习,人工智能,人工,智能,是,未来)=-log p(我们)-log p(学习)-log p(人工智能)-log p(人工)-log p(智能)-log(是)-log p(未来)\n", ".....\n", "\n", "#### Step 3: 返回第二步中概率最大的结果" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "词典长度: 298032\n", "1.0000000000000002\n" ] } ], "source": [ "# TODO: 第一步: 从综合类中文词库.xlsx 中读取所有中文词。\n", "# hint: 思考一下用什么数据结构来存储这个词典会比较好? 要考虑我们每次查询一个单词的效率。 \n", "\n", "import pandas as pd\n", "from collections import defaultdict\n", "\n", "df = pd.read_excel('综合类中文词库.xlsx', header=None)\n", "\n", "dic_words = defaultdict(int)\n", "for i in df[0].tolist():\n", " dic_words[i] += 1 # 保存词典库中读取的单词\n", "print(f'词典长度: {len(dic_words)}')\n", "\n", "# 以下是每一个单词出现的概率。为了问题的简化,我们只列出了一小部分单词的概率。 在这里没有出现的的单词但是出现在词典里的,统一把概率设置成为0.00001\n", "# 比如 p(\"学院\")=p(\"概率\")=...0.00001\n", "\n", "word_prob = {\"北京\":0.03,\"的\":0.08,\"天\":0.005,\"气\":0.005,\"天气\":0.06,\"真\":0.04,\"好\":0.05,\"真好\":0.04,\"啊\":0.01,\"真好啊\":0.02, \n", " \"今\":0.01,\"今天\":0.07,\"课程\":0.06,\"内容\":0.06,\"有\":0.05,\"很\":0.03,\"很有\":0.04,\"意思\":0.06,\"有意思\":0.005,\"课\":0.01,\n", " \"程\":0.005,\"经常\":0.08,\"意见\":0.08,\"意\":0.01,\"见\":0.005,\"有意见\":0.02,\"分歧\":0.04,\"分\":0.02, \"歧\":0.005}\n", "\n", "print (sum(word_prob.values()))" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def word_break(s, dic):\n", " def sentences(cur):\n", " result = []\n", " if cur < len(s):\n", " for next in range(cur + 1, len(s) + 1):\n", " if s[cur:next] in dic: #列表切片是前闭后开的过程\n", " result = result + [s[cur:next] + (tail and ',' + tail) for tail in sentences(next)]\n", " else:\n", " return ['']\n", " return result\n", "\n", " list_new = []\n", " for line in sentences(0):\n", " line = line.split(\",\")\n", " list_new.append(line)\n", " return list_new" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import math\n", "def get_best_segments(segments):\n", " scores = []\n", " for segment in segments:\n", " score = 0\n", " for word in segment:\n", " prob = word_prob[word] if word in word_prob else 0.00001\n", " score -= math.log(prob)\n", " scores.append(score)\n", " index = scores.index(min(scores))\n", " return segments[index]" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "## TODO 请编写word_segment_naive函数来实现对输入字符串的分词\n", "def word_segment_naive(input_str):\n", " \"\"\"\n", " 1. 对于输入字符串做分词,并返回所有可行的分词之后的结果。\n", " 2. 针对于每一个返回结果,计算句子的概率\n", " 3. 
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 2: Optimizing the above pipeline with the Viterbi algorithm\n",
    "\n",
    "Data needed for this project:\n",
    "1. 综合类中文词库.xlsx: contains Chinese words, used as the dictionary\n",
    "2. A small set of unigram probabilities, provided in the variable word_prob\n",
    "\n",
    "\n",
    "Example: given the dictionary [我们 学习 人工 智能 人工智能 未来 是] and the unigram probabilities p(我们)=0.25, p(学习)=0.15, p(人工)=0.05, p(智能)=0.1, p(人工智能)=0.2, p(未来)=0.1, p(是)=0.15\n",
    "\n",
    "#### Step 1: Build a weighted directed graph from the dictionary, the input sentence, and word_prob (see the course material)\n",
    "Each edge of the graph carries the probability of one word (any substring that exists in the dictionary counts as a legal word); the probabilities are given in word_prob.\n",
    "Note: think about a suitable way to store this directed graph. There is more than one reasonable representation.\n",
    "\n",
    "#### Step 2: Implement the Viterbi algorithm to find the best PATH through the graph, i.e. the best segmentation of the sentence\n",
    "See the course material for the details of the algorithm.\n",
    "\n",
    "#### Step 3: Return the result\n",
    "Same output format as in Part 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def DAG(sentence):\n",
    "    \"\"\"Build the directed acyclic graph of all legal words in the sentence.\n",
    "    DAG[k] is the list of inclusive end positions i such that sentence[k:i+1]\n",
    "    is a dictionary word, i.e. one outgoing edge per legal word starting at k.\"\"\"\n",
    "    DAG = {}\n",
    "    N = len(sentence)\n",
    "    for k in range(N):\n",
    "        tmplist = []\n",
    "        i = k\n",
    "        frag = sentence[k]\n",
    "        while i < N:\n",
    "            if frag in dic_words:  # any dictionary word is a legal edge, matching the candidate set of word_break in Part 1\n",
    "                tmplist.append(i)\n",
    "            i += 1\n",
    "            frag = sentence[k:i + 1]\n",
    "        if not tmplist:\n",
    "            # unknown character: fall back to a single-character edge so the graph stays connected\n",
    "            tmplist.append(k)\n",
    "        DAG[k] = tmplist\n",
    "    return DAG"
   ]
  },
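  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the chosen representation concrete: the graph is stored as an adjacency list over character positions, where dag[k] maps each start position k to the inclusive end positions of every legal word starting there. The cell below is a small added illustration (it assumes dic_words from Part 1 is already loaded); it prints the DAG of one test sentence together with the words behind its edges."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration: inspect the DAG for one test sentence. dag[k] holds inclusive\n",
    "# end positions, so sentence[k:i + 1] recovers the word behind each edge.\n",
    "example = \"经常有意见分歧\"\n",
    "dag = DAG(example)\n",
    "for k, ends in dag.items():\n",
    "    print(k, ends, [example[k:i + 1] for i in ends])"
   ]
  },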
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "## TODO: implement word_segment_viterbi to segment an input string\n",
    "def word_segment_viterbi(input_str):\n",
    "    \"\"\"\n",
    "    1. Build the DAG (directed graph) from the input string, the dictionary, and the given unigram probabilities.\n",
    "    2. Use the Viterbi algorithm to find the optimal PATH.\n",
    "    3. Return the segmentation.\n",
    "\n",
    "    input_str: input string, e.g. \"今天天气好\"\n",
    "    best_segment: best segmentation, e.g. [\"今天\",\"天气\",\"好\"]\n",
    "    \"\"\"\n",
    "\n",
    "    # TODO Step 1: build the weighted directed graph from the dictionary, the sentence, and the\n",
    "    # unigram probabilities (see the course material). Each edge is a word with its probability;\n",
    "    # a word that is in the dictionary but not in word_prob gets probability 0.00001.\n",
    "    # Note: think about a suitable way to store this directed graph; there is more than one\n",
    "    # reasonable representation.\n",
    "    N = len(input_str)\n",
    "    graph = DAG(input_str)\n",
    "\n",
    "    # TODO Step 2: use the Viterbi algorithm to find the best PATH, i.e. the one that maximizes\n",
    "    # P(sentence), or equivalently minimizes -log P(sentence).\n",
    "    # hint: why minimize -log p(w1) - log p(w2) - ... instead of maximizing the product\n",
    "    # p(w1)p(w2)...? Multiplying many small probabilities quickly underflows floating point,\n",
    "    # while the log turns the product into a numerically stable sum.\n",
    "    # route[idx] = (best score of input_str[idx:], inclusive end position of the first word on that path)\n",
    "    route = {N: (0.0, 0)}\n",
    "    for idx in range(N - 1, -1, -1):\n",
    "        route[idx] = min(\n",
    "            (route[x + 1][0] - math.log(word_prob.get(input_str[idx:x + 1], 0.00001)), x)\n",
    "            for x in graph[idx]\n",
    "        )\n",
    "\n",
    "    # TODO Step 3: recover the best segmentation by following the best PATH from position 0.\n",
    "    x = 0\n",
    "    best_segment = []\n",
    "    while x < N:\n",
    "        y = route[x][1] + 1\n",
    "        best_segment.append(input_str[x:y])\n",
    "        x = y\n",
    "\n",
    "    return best_segment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['北京', '的', '天气', '真好', '啊']\n",
      "['今天', '的', '课程', '内容', '很', '有意思']\n",
      "['经常', '有', '意见', '分歧']\n"
     ]
    }
   ],
   "source": [
    "# Tests\n",
    "print(word_segment_viterbi(\"北京的天气真好啊\"))\n",
    "print(word_segment_viterbi(\"今天的课程内容很有意思\"))\n",
    "print(word_segment_viterbi(\"经常有意见分歧\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TODO: What are the time and space complexities of the two methods?\n",
    "Method 1:\n",
    "time complexity = , space complexity =\n",
    "\n",
    "Method 2:\n",
    "time complexity = , space complexity ="
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TODO: If we keep improving this segmentation tool, what approaches could we consider? (list at least 3)\n",
    "- 0. (example) The current probabilities are incomplete; estimating each word's probability from a large corpus would make them more realistic.\n",
    "- 1.\n",
    "- 2.\n",
    "- 3."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}