
comparative analysis

Looking at the text, lemmatization should return the more correct output, right? Not only correct, but also a shortened version. I ran an experiment on this line:

sentence ="having playing  in today gaming ended with greating victorious"

But when I ran the stemmer and lemmatizer code, I got the following results:

        ['have', 'play', 'in', 'today', 'game', 'end', 'with', 'great', 'victori']
        ['having', 'playing', 'in', 'today', 'gaming', 'ended', 'with', 'greating', 'victorious']

The first list is the stemmed output: everything looks fine except "victori" (it should be "victorious", right?). The second is the lemmatized output: every word is a real word, but all are left in their original form. So which option is better in this case: the shorter but partly incorrect one, or the longer but correct one?

        import nltk
        from nltk.tokenize import word_tokenize, sent_tokenize
        from nltk.corpus import stopwords
        from sklearn.feature_extraction.text import CountVectorizer
        from nltk.stem import PorterStemmer, WordNetLemmatizer

        # 'punkt' is needed by word_tokenize, 'wordnet' by WordNetLemmatizer
        nltk.download('punkt')
        nltk.download('wordnet')
        nltk.download('stopwords')

        mylemmatizer = WordNetLemmatizer()
        mystemmer = PorterStemmer()
        sentence = "having playing in today gaming ended with greating victorious"
        words = word_tokenize(sentence)
        # print(words)
        stemmed = [mystemmer.stem(w) for w in words]
        lemmatized = [mylemmatizer.lemmatize(w) for w in words]
        print(stemmed)
        print(lemmatized)
        # mycounter =CountVectorizer()
        # mysentence ="i love ibsu. because ibsu is great university"
        # # print(word_tokenize(mysentence))
        # # print(sent_tokenize(mysentence))
        # individual_words=word_tokenize(mysentence)
        # stops =list(stopwords.words('english'))
        # words =[w  for w in  individual_words if w not in  stops  and  w.isalnum() ]
        # reduced =[mystemmer.stem(w) for w  in words]
        
        # new_sentence =' '.join(words)
        # frequencies =mycounter.fit_transform([new_sentence])
        # print(frequencies.toarray())
        # print(mycounter.vocabulary_)
        # print(mycounter.get_feature_names_out())
        # print(new_sentence)
        # print(words)
        # # print(list(stopwords.words('english')))

Accepted answer

Here is an example showing the parts of speech the lemmatizer uses for the words in your string:

import nltk
# word_tokenize needs 'punkt' and pos_tag needs the tagger model
nltk.download('wordnet')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
from nltk.corpus import wordnet
from nltk.stem.wordnet import WordNetLemmatizer
from nltk import word_tokenize, pos_tag
from collections import defaultdict

# map Penn Treebank tag prefixes (J/V/R) to WordNet POS tags; default to noun
tag_map = defaultdict(lambda : wordnet.NOUN)
tag_map['J'] = wordnet.ADJ
tag_map['V'] = wordnet.VERB
tag_map['R'] = wordnet.ADV

sentence = "having playing in today gaming ended with greating victorious"
tokens = word_tokenize(sentence)
wnl = WordNetLemmatizer()
for token, tag in pos_tag(tokens):
    print('found tag', tag[0])
    lemma = wnl.lemmatize(token, tag_map[tag[0]])
    print(token, "lemmatized to", lemma)

Output:

found tag V
having lemmatized to have
found tag N
playing lemmatized to playing
found tag I
in lemmatized to in
found tag N
today lemmatized to today
found tag N
gaming lemmatized to gaming
found tag V
ended lemmatized to end
found tag I
with lemmatized to with
found tag V
greating lemmatized to greating
found tag J
victorious lemmatized to victorious

Lemmatization distills words down to their base forms. It is similar to stemming, but it brings context to each word, linking words with similar meanings to a single lemma. The fancy linguistic term for this is "morphology".

So how do these words relate to one another in a given language? If you look at the output above, the -ing verbs are resolved as nouns. Besides acting as verbs, -ing words can also be used as nouns: "I love swimming." The verb is "love", and the noun is "swimming". That explains the tags you see above.

Honestly, your sentence above is not really a sentence at all. I would not say that one output is more correct than the other, but lemmatization is considered more effective when the parts of speech are used correctly in a sentence containing one or more independent clauses.
