BogoToBogo

Machine Learning (Natural Language Processing - NLP) : Sentiment Analysis I














Sentiment analysis

Sentiment analysis (also known as opinion mining) is a subfield of natural language processing (NLP) that is widely applied to reviews and social media, in areas ranging from marketing to customer service.

With this series of articles on sentiment analysis, we'll learn how to encode a document as a feature vector using the bag-of-words model. Then, we'll see how to weight term frequencies by relevance using term frequency-inverse document frequency (tf-idf).

Since working with huge datasets can be quite expensive due to the large feature vectors created during this process, we'll also learn how to train a machine learning algorithm without loading the whole dataset into the computer's memory. We'll take an out-of-core learning approach.




Natural Language Processing (NLP): Sentiment Analysis I (IMDb & bag-of-words)

Natural Language Processing (NLP): Sentiment Analysis II (tokenization, stemming, and stop words)

Natural Language Processing (NLP): Sentiment Analysis III (training & cross validation)

Natural Language Processing (NLP): Sentiment Analysis IV (out-of-core)


In this article, we will see how to use machine learning algorithms to classify the attitude of a writer toward a particular topic, or the overall contextual polarity of a document.








Internet Movie Database (IMDb)

For our sentiment analysis, we will use a large dataset of movie reviews from the Internet Movie Database (IMDb) collected by Maas et al. (Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics).

It contains 50,000 polar movie reviews, labeled as either positive or negative.

A review was labeled positive if the movie was rated more than six stars on IMDb, and negative if it was rated fewer than five stars.

We can download and extract the gzipped archive:

$ wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
$ tar xvzf aclImdb_v1.tar.gz

The archive provides 25,000 highly polar movie reviews for training and 25,000 for testing, along with additional unlabeled data.

aclImdb-tree.png

Now we want to assemble the individual text documents from the decompressed download archive into a single CSV file.

In the following code section, we will be reading the movie reviews into a pandas DataFrame object, and it may take a while (~10 minutes).

IMDB-pandas-read.png
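The screenshot above comes from the original notebook; as a rough sketch of the same idea (assuming the archive was extracted into aclImdb/ in the current directory, and with load_imdb as a helper name chosen here for illustration), the assembly step could look like this:

```python
import os
import pandas as pd

def load_imdb(basepath='aclImdb'):
    """Assemble the individual review files into a single DataFrame.

    Iterates over the train/test subdirectories and the pos/neg label
    folders, reading each text file and appending it together with an
    integer class label (1 = positive, 0 = negative).
    """
    labels = {'pos': 1, 'neg': 0}
    rows = []
    for subset in ('train', 'test'):
        for label in ('pos', 'neg'):
            path = os.path.join(basepath, subset, label)
            for fname in sorted(os.listdir(path)):
                with open(os.path.join(path, fname), encoding='utf-8') as f:
                    rows.append([f.read(), labels[label]])
    return pd.DataFrame(rows, columns=['review', 'sentiment'])

if os.path.isdir('aclImdb'):
    df = load_imdb()
    print(df.shape)  # 50,000 rows, two columns
```

Reading 50,000 small files is I/O-bound, which is why this step can take several minutes.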

Let's see what our DataFrame looks like:

IMDB-pandas-read2.png

As we can see from the Jupyter notebook, we iterated over the train and test subdirectories in the main aclImdb directory and read the individual text files from the pos and neg subdirectories. We appended them to the DataFrame df with an integer class label (1 = positive and 0 = negative).

The class labels in the assembled dataset are sorted, so we may want to shuffle the DataFrame using the permutation() function from the np.random submodule. This will be helpful when we split the dataset into training and test sets in later sections, and when we stream the data directly from our local drive.
We may also want to store the assembled and shuffled movie review dataset as a CSV file:

To-csv.png
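A minimal sketch of the shuffle-and-save step (the four-row df here is a stand-in for the assembled review DataFrame, and movie_data.csv is an assumed output filename):

```python
import numpy as np
import pandas as pd

# Stand-in for the assembled DataFrame; the class labels are sorted.
df = pd.DataFrame({'review': ['good', 'great', 'bad', 'poor'],
                   'sentiment': [1, 1, 0, 0]})

# Shuffle the rows so the labels are no longer sorted.
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))

# Persist the shuffled dataset as a CSV file.
df.to_csv('movie_data.csv', index=False, encoding='utf-8')

# Reading it back verifies the round trip.
df2 = pd.read_csv('movie_data.csv', encoding='utf-8')
print(df2.shape)
```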



Bag-of-words model
The bag-of-words model is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. (from Wikipedia)

The bag-of-words model allows us to represent text as numerical feature vectors. It is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier. It can be summarized as follows:

  1. Create a vocabulary of unique tokens from the entire set of documents.
  2. Construct a feature vector from each document that contains the counts of how often each word occurs in the particular document.

The feature vectors consist mostly of zeros and are called sparse, since the unique words in each document represent only a small subset of all the words in the bag-of-words vocabulary.
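Before turning to scikit-learn, the two steps above can be sketched in plain Python (the three example sentences are hypothetical, chosen only to illustrate the counting):

```python
from collections import Counter

docs = ['the sun is shining',
        'the weather is sweet',
        'the sun is shining and the weather is sweet']

# Step 1: build a vocabulary of unique tokens across all documents.
vocab = sorted({tok for d in docs for tok in d.split()})

# Step 2: one count vector per document; each position holds how often
# the corresponding vocabulary word occurs in that document.
vectors = [[Counter(d.split())[tok] for tok in vocab] for d in docs]

print(vocab)       # ['and', 'is', 'shining', 'sun', 'sweet', 'the', 'weather']
print(vectors[2])  # [1, 2, 1, 1, 1, 2, 1]
```

Note that word order is lost ("bag"), but multiplicity is kept: "is" and "the" each appear twice in the third document, so their counts are 2.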





Constructing Bag-of-words model with scikit-learn

We can use sklearn.feature_extraction.text.CountVectorizer to construct a bag-of-words model based on the word counts in the respective documents. It implements both tokenization and occurrence counting in a single class:

vectorizer-CountVectorizer.png

Let's use it to tokenize and count the word occurrences in a corpus of text documents:

CountVectorizer.png

The fit_transform method on CountVectorizer constructs the vocabulary of the bag-of-words model. The default configuration tokenizes the string by extracting words of at least 2 letters. The two sentences are transformed into sparse feature vectors:

FeatureVectors.png

Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary.

The vocabulary is stored in a Python dictionary that maps the unique words to integer indices:

vectorizer-vocabulary.png

We have a vocabulary of 9 words: 'and', 'document', ..., 'this'.

The converse mapping from feature name to column index is stored in the vocabulary_ attribute of the vectorizer:

VectorizerVocabularyGet.png

So, any words that were not seen in the training corpus will be completely ignored in future calls to the transform method:

UnseenWords.png
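The workflow shown in the screenshots above can be sketched end to end as follows (the corpus is the same four-sentence example used in this section):

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    'This is the first document.',
    'This is the second second.',
    'And the third one.',
    'Is this the first document?',
]

vectorizer = CountVectorizer()        # default: words of at least 2 letters
X = vectorizer.fit_transform(corpus)  # sparse document-term count matrix

print(sorted(vectorizer.vocabulary_))      # the 9 unique tokens
print(vectorizer.vocabulary_['document'])  # column index of 'document': 1
print(X.toarray())                         # dense counts, one row per document
# e.g. row for 'This is the second second.': [0 0 0 1 0 2 1 0 1]

# Words unseen during fit are ignored by transform:
print(vectorizer.transform(['Something completely new.']).toarray())
# [[0 0 0 0 0 0 0 0 0]]
```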

ref: Feature extraction





tf-idf (Term frequency-inverse document frequency)

When analyzing text data, there are words that occur across multiple documents. Those frequently occurring words typically don't contain useful or discriminatory information.

In other words, frequent words such as "a", "the", and "is" carry very little meaningful information about the actual contents of a document.

In this section, we will learn about a useful transformation technique called term frequency-inverse document frequency (tf-idf). We'll use it to re-weight those frequently occurring words in the feature vectors. To re-weight the count features into floating-point values suitable for use by a classifier, we apply the tf-idf transform, which can be defined as the product of the term frequency and the inverse document frequency:

$$ \text{tf-idf}(t,d) = \text{tf}(t,d) \times \text{idf}(t,d) $$

where $\text{tf}$ means term-frequency while $\text{tf-idf}$ means term-frequency times inverse document-frequency.

The inverse document frequency $\text{idf}(t, d)$ can be defined as:

$$ \text{idf}(t,d) = \log \frac {1+n_d}{1+\text{df}(d,t)} + 1$$

where $n_d$ is the total number of documents, and $\text{df}(d,t)$ is the number of documents $d$ that contain the term $t$.

Note that the $\log$ is used to ensure that low document frequencies are not given too much weight.

Let's be more specific using the following "corpus":

corpus = [
    'This is the first document.',
    'This is the second second.',
    'And the third one.',
    'Is this the first document?',
    ]

Consider the term "is" in the 4th document ($d_4$). The term frequency $\text{tf}$ for "is" is 1, since it appears only once in that document. The document frequency of this term ($\text{df}(d,t)$) is 3, since "is" occurs in three documents. With the total number of documents $n_d = 4$, we can calculate the $\text{idf}$ as follows:

$$ \text{idf}(\text{"is"}, d_4) = \log \frac {1+n_d}{1+\text{df}(d,t)} + 1 = \log \frac {1+4}{1+3} + 1$$ $$ \text{tf-idf}(\text{"is"}, d_4) = \text{tf}(t,d) \times \text{idf}(t,d) = 1 \times \left(\log\frac{5}{4}+1\right) $$

TfidfTransformer() takes the raw term frequencies from CountVectorizer as input and transforms them into $\text{tf-idf}$ values:

TfidfTransformer.png

Here, we used the "corpus" document of the example in the previous section.

Note that a frequent term such as "the" (index 6), which appeared in every document, got a relatively low $\text{tf-idf}$ (0.24 or 0.43) because it is unlikely to contain any useful, discriminatory information, while "second" (index 5) got a higher $\text{tf-idf}$ (0.86) because it appeared in only one of the four documents.

The idf weights of each feature computed by the fit method call are stored in the model's idf_ attribute:

weight-of-each-feature.png
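The hand calculation for "is" above can be checked with a short sketch. Here smooth_idf=True matches the idf formula used in this section (with natural log), and norm=None is set to expose the raw tf x idf products; the transformer's default norm='l2' would additionally rescale each row to unit length, which is how the 0.24/0.43/0.86 figures arise:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = [
    'This is the first document.',
    'This is the second second.',
    'And the third one.',
    'Is this the first document?',
]

counts = CountVectorizer().fit_transform(corpus)

# smooth_idf=True: idf(t,d) = log((1+n_d)/(1+df(d,t))) + 1
transformer = TfidfTransformer(smooth_idf=True, norm=None)
tfidf = transformer.fit_transform(counts)

# The idf weights learned by fit are stored in idf_;
# column 3 corresponds to the term 'is' (df = 3, n_d = 4).
print(transformer.idf_[3])    # log(5/4) + 1 ~ 1.2231
print(tfidf.toarray()[3, 3])  # tf-idf('is', d4) = 1 * (log(5/4) + 1)
```

"the" (column 6) occurs in all four documents, so its idf is log(5/5) + 1 = 1, the lowest possible weight.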



tf-idf with count example

The following example is from the scikit-learn Feature extraction documentation.

CountsExample-1.png
CountsExample-2.png



Github Jupyter Notebook source

The Jupyter notebook source is available on GitHub: Sentiment Analysis





Next: Machine Learning (Natural Language Processing - NLP): Sentiment Analysis II









Ph.D. / Golden Gate Ave, San Francisco / Seoul National Univ / Carnegie Mellon / UC Berkeley / DevOps / Deep Learning / Visualization

YouTubeMy YouTube channel

Sponsor Open Source development activities and free contents for everyone.

Thank you.

- K Hong





Copyright © 2024, bogotobogo