Explain the terminology: tokenization, stemming, lemmatization, and stopwords in NLP

Tokenization
Stemming
Stopwords
Lemmatization
In NLP (Natural Language Processing), tokenization, stemming, lemmatization, and stopword removal are common text-preprocessing techniques. Let's explain each of these concepts with examples and their respective outputs:

Tokenization:
Tokenization is the process of splitting a text into individual units, known as tokens. Tokens can be words, phrases, or even characters, depending on the granularity required for analysis. Tokenization serves as the foundation for various NLP tasks, such as text classification, information retrieval, and language modeling.
Example: Consider the sentence "I love to eat pizza." After tokenization, the sentence can be split into the following tokens:

"I"
"love"
"to"
"eat"
"pizza"
Output: Tokenized tokens: ["I", "love", "to", "eat", "pizza"]
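As a quick illustration, here is a minimal sketch using NLTK's word_tokenize (assuming NLTK is installed and the 'punkt' tokenizer models are downloaded). Note that word_tokenize also keeps punctuation as its own token:

import nltk
nltk.download('punkt')  # tokenizer models (one-time download)

from nltk.tokenize import word_tokenize

sentence = "I love to eat pizza."
tokens = word_tokenize(sentence)
print(tokens)  # ['I', 'love', 'to', 'eat', 'pizza', '.']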

Stemming:
Stemming is a process that reduces words to their base or root form. It aims to normalize words by removing suffixes or prefixes, thereby reducing inflected or derived words to their core form. Stemming helps in handling variations of words and reducing the vocabulary size.
Example: Let's take the word "running" as an example. After stemming, the word would be reduced to its base form, which is "run".

Output: Stemmed word: "run"

Applied to a longer example text:

Original tokens: ['I', 'love', 'to', 'eat', 'pizza', '.', 'Eating', 'pizzas', 'is', 'my', 'favorite', 'activity', '.']
Stemmed tokens: ['I', 'love', 'to', 'eat', 'pizza', '.', 'eat', 'pizza', 'is', 'my', 'favorit', 'activ', '.']
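A minimal sketch with NLTK's PorterStemmer (assuming NLTK is installed). Note that a stem is not guaranteed to be a valid dictionary word, as "favorit" and "activ" above show:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ['running', 'eats', 'eating', 'favorite', 'activity']
print([stemmer.stem(word) for word in words])
# ['run', 'eat', 'eat', 'favorit', 'activ']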

Stopwords:
Stopwords are commonly occurring words that carry little or no meaningful information in a given text. These words, such as "and," "the," "is," etc., are often removed during text preprocessing to improve processing efficiency and focus on more important words.
Example: Suppose we have the sentence "The weather is nice and sunny." The stopwords in this sentence are "the," "is," and "and," since these common words contribute little to the overall meaning of the sentence.

Output: Removed stopwords: ["The", "is", "and"]

It's important to note that the specific set of stopwords may vary depending on the application or language being analyzed. Different NLP libraries and frameworks provide pre-defined sets of stopwords, but custom lists can also be created based on the specific context or domain.
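A minimal sketch using NLTK's built-in English stopword list (assuming NLTK and its 'stopwords' corpus are available):

import nltk
nltk.download('stopwords')
nltk.download('punkt')

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words('english'))
tokens = word_tokenize("The weather is nice and sunny.")
removed = [t for t in tokens if t.lower() in stop_words]
kept = [t for t in tokens if t.lower() not in stop_words]
print(removed)  # ['The', 'is', 'and']
print(kept)     # ['weather', 'nice', 'sunny', '.']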

Lemmatization:
Lemmatization reduces words to their base or dictionary form (lemma), considering the part of speech and grammatical context. It ensures that the resulting lemma is a valid word.
Example:

Input: "The cats are running and eating."
Output: Lemmatized text: "The cat be run and eat."
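A minimal sketch using NLTK's WordNetLemmatizer (assuming the 'wordnet' corpus is downloaded). WordNet treats words as nouns by default, so producing an output like "The cat be run and eat" requires passing each word's part of speech (for example, from a POS tagger):

import nltk
nltk.download('wordnet')

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('cats'))              # 'cat' (noun by default)
print(lemmatizer.lemmatize('running'))           # 'running' (unchanged as a noun)
print(lemmatizer.lemmatize('running', pos='v'))  # 'run'
print(lemmatizer.lemmatize('are', pos='v'))      # 'be'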

Cosine Similarity:
Cosine similarity is a measure used to determine the similarity between two vectors or documents. It calculates the cosine of the angle between the vectors, representing how similar or related they are. It is commonly used in text mining and information retrieval tasks, such as document similarity and clustering.
Example: Suppose we have two documents represented as vectors:

Document 1: "I love to eat pizza."
Document 2: "I enjoy eating pizza."

By converting these documents into vector representations, for example with the Bag-of-Words model, we can calculate the cosine similarity between the vectors: cosine(A, B) = (A · B) / (‖A‖ ‖B‖). For non-negative representations such as word counts, the score ranges from 0 to 1, where 1 indicates maximum similarity and 0 indicates no similarity.

Output: Cosine similarity between Document 1 and Document 2: 0.82
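A minimal standalone sketch with scikit-learn (assumed installed). The exact score depends on how the documents are vectorized (Bag-of-Words counts here, TF-IDF in the Django example below), so the 0.82 above is illustrative:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ["I love to eat pizza.", "I enjoy eating pizza."]
vectors = CountVectorizer().fit_transform(documents)  # Bag-of-Words counts
score = cosine_similarity(vectors[0], vectors[1])[0][0]
print(score)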

By applying tokenization, stemming, lemmatization, and stopword removal, text data can be preprocessed into a more manageable form for further analysis or modeling in NLP. These techniques reduce noise, improve computational efficiency, and help extract more meaningful insights from the text.

Implementation

Here is how you can perform tokenization, stemming, and stopword removal using Django and the NLTK (Natural Language Toolkit) library:

Install the NLTK library:

pip install nltk

Import the necessary modules in your Django views.py file:

from django.shortcuts import render
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
import nltk
nltk.download('punkt')
nltk.download('stopwords')
Define a function in your Django views.py file to perform the preprocessing:


def preprocess_text(request):
    text = "I love to eat pizza."

    # Tokenization
    tokens = word_tokenize(text)

    # Stopword removal (done before stemming, so the tokens still
    # match the entries in NLTK's stopword list)
    stop_words = set(stopwords.words('english'))
    filtered = [token for token in tokens if token.lower() not in stop_words]

    # Stemming
    stemmer = PorterStemmer()
    filtered_tokens = [stemmer.stem(token) for token in filtered]

    return render(request, 'preprocess.html', {'tokens': tokens, 'filtered_tokens': filtered_tokens})

Create a template file named preprocess.html to display the results:

<h3>Tokenized Tokens:</h3>
<ul>
    {% for token in tokens %}
        <li>{{ token }}</li>
    {% endfor %}
</ul>


<h3>Filtered Tokens (after stemming and stopwords removal):</h3>
<ul>
    {% for token in filtered_tokens %}
        <li>{{ token }}</li>
    {% endfor %}
</ul>

Finally, update your Django urls.py file to map the URL to the preprocess_text function:

from django.urls import path
from .views import preprocess_text

urlpatterns = [
    path('preprocess/', preprocess_text, name='preprocess_text'),
]

Now, when you visit the "/preprocess/" URL in your Django app, it will tokenize the text, remove stopwords, apply stemming, and display the results in the preprocess.html template.

Note: Make sure you have NLTK and its required resources downloaded before running the Django server.

Another Example

Install the NLTK library:

pip install nltk

Import the necessary modules in your Django views.py file:

from django.shortcuts import render
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer, PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import nltk
nltk.download('punkt')
nltk.download('wordnet')
Define a function in your Django views.py file to perform the preprocessing and cosine similarity calculation:




def preprocess_and_similarity(request):
    text1 = "I love to eat pizza."
    text2 = "I enjoy eating pizza."

    # Tokenization
    tokens1 = word_tokenize(text1)
    tokens2 = word_tokenize(text2)

    # Lemmatization
    lemmatizer = WordNetLemmatizer()
    lemmatized_tokens1 = [lemmatizer.lemmatize(token) for token in tokens1]
    lemmatized_tokens2 = [lemmatizer.lemmatize(token) for token in tokens2]

    # Stemming
    stemmer = PorterStemmer()
    stemmed_tokens1 = [stemmer.stem(token) for token in tokens1]
    stemmed_tokens2 = [stemmer.stem(token) for token in tokens2]

    # Cosine similarity
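    # (the TF-IDF vectors are built from the raw sentences, so the
    # stemmed/lemmatized tokens above are shown for display only)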
    documents = [text1, text2]
    vectorizer = TfidfVectorizer()
    tfidf_matrix = vectorizer.fit_transform(documents)
    cosine_sim = cosine_similarity(tfidf_matrix[0], tfidf_matrix[1])[0][0]

    return render(request, 'preprocess_similarity.html', {
        'lemmatized_tokens1': lemmatized_tokens1,
        'lemmatized_tokens2': lemmatized_tokens2,
        'stemmed_tokens1': stemmed_tokens1,
        'stemmed_tokens2': stemmed_tokens2,
        'cosine_sim': cosine_sim,
    })

Create a template file named preprocess_similarity.html to display the results:

<h3>Lemmatized Tokens:</h3>
<h4>Text 1:</h4>
<ul>
    {% for token in lemmatized_tokens1 %}
        <li>{{ token }}</li>
    {% endfor %}
</ul>
<h4>Text 2:</h4>
<ul>
    {% for token in lemmatized_tokens2 %}
        <li>{{ token }}</li>
    {% endfor %}
</ul>

<h3>Stemmed Tokens:</h3>
<h4>Text 1:</h4>
<ul>
    {% for token in stemmed_tokens1 %}
        <li>{{ token }}</li>
    {% endfor %}
</ul>
<h4>Text 2:</h4>
<ul>
    {% for token in stemmed_tokens2 %}
        <li>{{ token }}</li>
    {% endfor %}
</ul>

<h3>Cosine Similarity:</h3>
<p>{{ cosine_sim }}</p>

Finally, update your Django urls.py file to map the URL to the preprocess_and_similarity function:

from django.urls import path
from .views import preprocess_and_similarity

urlpatterns = [
    path('preprocess_similarity/', preprocess_and_similarity, name='preprocess_and_similarity'),
]
