Overview
Today we begin a journey into natural language processing (NLP). NLP enables machines to process, understand, and make use of human language, bridging the gap between machine language and human language.
The jieba Tokenizer
The jieba algorithm performs an efficient word-graph scan based on a prefix dictionary, building a directed acyclic graph (DAG) of every possible word the characters in a sentence can form. Dynamic programming then finds the maximum-probability path through the graph, i.e. the segmentation with the highest combined word frequency. For unknown (out-of-vocabulary) words, jieba falls back on an HMM model of the word-forming ability of Chinese characters, decoded with the Viterbi algorithm.
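To make this concrete, here is a minimal sketch of the DAG-plus-dynamic-programming idea over a toy prefix dictionary. The dictionary, frequencies, and function names here are invented for illustration; jieba's real implementation differs in detail (frequency smoothing, HMM handling of unknown character runs, and so on).

import math

# Toy prefix dictionary with made-up frequencies (not jieba's real data)
FREQ = {"自然": 10, "語言": 12, "自然語言": 8, "處理": 15}
TOTAL = sum(FREQ.values())

def build_dag(sentence):
    # For each start position i, record every end position j such that
    # sentence[i:j+1] is a dictionary word; single characters always count.
    dag = {}
    for i in range(len(sentence)):
        ends = [i]
        for j in range(i + 1, len(sentence)):
            if sentence[i:j + 1] in FREQ:
                ends.append(j)
        dag[i] = ends
    return dag

def max_prob_cut(sentence):
    # Dynamic programming from right to left: route[i] = (best log-probability
    # from position i to the end, end index of the first word on that path).
    n = len(sentence)
    dag = build_dag(sentence)
    route = {n: (0.0, 0)}
    for i in range(n - 1, -1, -1):
        route[i] = max(
            (math.log(FREQ.get(sentence[i:j + 1], 1) / TOTAL) + route[j + 1][0], j)
            for j in dag[i]
        )
    # Follow the best path to recover the segmentation.
    words, i = [], 0
    while i < n:
        j = route[i][1]
        words.append(sentence[i:j + 1])
        i = j + 1
    return words

print(max_prob_cut("自然語言處理"))  # ['自然語言', '處理']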
Installation
pip install jieba
Verify the installation:
import jieba

print(jieba.__version__)
Output:
0.42.1
Precise Mode
Precise mode: tries to segment the sentence as accurately as possible; it is also the default mode.
Format:
jieba.cut(content, cut_all=False)
Parameters:
- content: the text to be segmented
- cut_all: True for full mode, False for precise mode
Example:
import jieba

# Define the text
content = "自然語言處理是人工智能和語言學領域的分支學科。此領域探討如何處理及運用自然語言;自然語言處理包括多方面和步驟,基本有認知、理解、生成等部分。"

# Precise mode
seg = jieba.cut(content, cut_all=False)

# Print the result
print([word for word in seg])
Output:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Windows\AppData\Local\Temp\jieba.cache
Loading model cost 0.984 seconds.
Prefix dict has been built successfully.
["自然語言", "處理", "是", "人工智能", "和", "語言學", "領域", "的", "分支", "學科", "。", "此", "領域", "探討", "如何", "處理", "及", "運用", "自然語言", ";", "自然語言", "處理", "包括", "多方面", "和", "步驟", ",", "基本", "有", "認知", "、", "理解", "、", "生成", "等", "部分", "。"]
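Note that jieba.cut returns a generator, which is why the example wraps it in a list comprehension. If you want a list directly, jieba also provides jieba.lcut (and, for the search engine mode covered below, jieba.lcut_for_search):

import jieba

# lcut is equivalent to list(jieba.cut(...))
print(jieba.lcut("自然語言處理", cut_all=False))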
Full Mode
Full mode: scans out every possible word in the sentence. It is very fast, but it cannot resolve ambiguity (note the overlapping tokens such as 如何 and 何處 in the output below).
Example:
import jieba

# Define the text
content = "自然語言處理是人工智能和語言學領域的分支學科。此領域探討如何處理及運用自然語言;自然語言處理包括多方面和步驟,基本有認知、理解、生成等部分。"

# Full mode
seg = jieba.cut(content, cut_all=True)

# Print the result
print([word for word in seg])
Output:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Windows\AppData\Local\Temp\jieba.cache
Loading model cost 0.999 seconds.
Prefix dict has been built successfully.
["自然", "自然語言", "語言", "處理", "是", "人工", "人工智能", "智能", "和", "語言", "語言學", "領域", "的", "分支", "學科", "。", "此", "領域", "探討", "如何", "何處", "處理", "及", "運用", "自然", "自然語言", "語言", ";", "自然", "自然語言", "語言", "處理", "包括", "多方", "多方面", "方面", "和", "步驟", ",", "基本", "有", "認知", "、", "理解", "、", "生成", "等", "部分", "。"]
Search Engine Mode
Search engine mode: building on precise mode, it re-segments long words. This improves recall, making it well suited to tokenization for search engines.
Example:
import jieba

# Define the text
content = "自然語言處理是人工智能和語言學領域的分支學科。此領域探討如何處理及運用自然語言;自然語言處理包括多方面和步驟,基本有認知、理解、生成等部分。"

# Search engine mode
seg = jieba.cut_for_search(content)

# Print the result
print([word for word in seg])
Output:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Windows\AppData\Local\Temp\jieba.cache
Loading model cost 1.500 seconds.
Prefix dict has been built successfully.
["自然", "語言", "自然語言", "處理", "是", "人工", "智能", "人工智能", "和", "語言", "語言學", "領域", "的", "分支", "學科", "。", "此", "領域", "探討", "如何", "處理", "及", "運用", "自然", "語言", "自然語言", ";", "自然", "語言", "自然語言", "處理", "包括", "多方", "方面", "多方面", "和", "步驟", ",", "基本", "有", "認知", "、", "理解", "、", "生成", "等", "部分", "。"]
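When building a search index you often need character offsets as well as the tokens themselves. jieba also provides jieba.tokenize, which yields (word, start, end) triples and accepts mode="search" to apply the same long-word re-splitting; a small usage sketch:

import jieba

# tokenize yields (word, start, end) triples; mode="search" re-splits long words
for word, start, end in jieba.tokenize("自然語言處理", mode="search"):
    print(word, start, end)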
Part-of-Speech Tagging
Part-of-speech tagging is provided by the jieba.posseg module.
import jieba.posseg as psg

# Define the text
content = "自然語言處理是人工智能和語言學領域的分支學科。此領域探討如何處理及運用自然語言;自然語言處理包括多方面和步驟,基本有認知、理解、生成等部分。"

# Segment with POS tags
seg = psg.lcut(content)

# Extract (word, flag) pairs
part_of_speech = [(x.word, x.flag) for x in seg]

# Print the result
print(part_of_speech)
Output:
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Windows\AppData\Local\Temp\jieba.cache
Loading model cost 1.500 seconds.
Prefix dict has been built successfully.
[("自然語言", "l"), ("處理", "v"), ("是", "v"), ("人工智能", "n"), ("和", "c"), ("語言學", "n"), ("領域", "n"), ("的", "uj"), ("分支", "n"), ("學科", "n"), ("。", "x"), ("此", "zg"), ("領域", "n"), ("探討", "v"), ("如何", "r"), ("處理", "v"), ("及", "c"), ("運用", "vn"), ("自然語言", "l"), (";", "x"), ("自然語言", "l"), ("處理", "v"), ("包括", "v"), ("多方面", "m"), ("和", "c"), ("步驟", "n"), (",", "x"), ("基本", "n"), ("有", "v"), ("認知", "v"), ("、", "x"), ("理解", "v"), ("、", "x"), ("生成", "v"), ("等", "u"), ("部分", "n"), ("。", "x")]
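A common use of these flags is filtering tokens by part of speech, for example keeping only the nouns (tags beginning with "n"). A minimal sketch using the same module:

import jieba.posseg as psg

content = "自然語言處理是人工智能和語言學領域的分支學科。"

# Keep only words whose POS tag starts with "n" (nouns)
nouns = [x.word for x in psg.lcut(content) if x.flag.startswith("n")]
print(nouns)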
That concludes this detailed look at precise segmentation and the other basic jieba operations for NLP in Python machine learning.
Original article: https://blog.csdn.net/weixin_46274168/article/details/120107261