
Machine Learning (Natural Language Processing - NLP) : Sentiment Analysis III














Note

In my previous article (Machine Learning (Natural Language Processing - NLP) : Sentiment Analysis II), we learned about tokenization, stemming, and stop-word removal.

In this article, we are going to train a logistic regression model for document classification.




Natural Language Processing (NLP): Sentiment Analysis I (IMDb & bag-of-words)

Natural Language Processing (NLP): Sentiment Analysis II (tokenization, stemming, and stop words)

Natural Language Processing (NLP): Sentiment Analysis III (training & cross validation)

Natural Language Processing (NLP): Sentiment Analysis IV (out-of-core)







Training a model

We're now ready to classify the movie reviews as positive or negative.

First, we divide the DataFrame that we cleaned up in the previous articles into 25,000 documents for training and 25,000 for testing:

train-testing-split.png
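
The split in the screenshot can be sketched as follows, assuming the cleaned-up DataFrame df has a 'review' column holding the text and a 'sentiment' column holding the 0/1 labels, as set up in the earlier articles:

# `df` is assumed to be the shuffled, cleaned DataFrame of 50,000 IMDb
# reviews from the earlier articles in this series.
X_train = df.iloc[:25000]['review'].values
y_train = df.iloc[:25000]['sentiment'].values
X_test = df.iloc[25000:]['review'].values
y_test = df.iloc[25000:]['sentiment'].values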

Next, we will use a GridSearchCV object with 5-fold stratified cross-validation to find the optimal set of parameters for our logistic regression model:

LogisticRegression_pipeline.png

The sklearn.model_selection.GridSearchCV returns the training score after an exhaustive search over specified parameter values for an estimator. It implements "fit" and "score" methods, which we are going to use once the grid search finishes.

It also implements "predict", "predict_proba", "decision_function", "transform" and "inverse_transform" if they are implemented in the estimator used:

GridSearchCV.png

The parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid.

Here is the full code:

TrainingCode.png
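
For reference, here is a sketch of that code, closely following "Python Machine Learning" by Sebastian Raschka (listed under Refs below). The tokenizer, tokenizer_porter, and stop names are assumed to be the tokenizer functions and stop-word list defined in the previous article:

from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV

# TfidfVectorizer combines CountVectorizer and TfidfTransformer in one step.
tfidf = TfidfVectorizer(strip_accents=None,
                        lowercase=False,
                        preprocessor=None)

# Two dictionaries: the first keeps the TfidfVectorizer defaults
# (use_idf=True, smooth_idf=True, norm='l2'); the second switches to raw
# term frequencies via use_idf=False, smooth_idf=False, norm=None.
param_grid = [{'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]},
              {'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'vect__use_idf': [False],
               'vect__smooth_idf': [False],
               'vect__norm': [None],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]}]

# The liblinear solver supports both the l1 and l2 penalties in the grid.
lr_tfidf = Pipeline([('vect', tfidf),
                     ('clf', LogisticRegression(random_state=0,
                                                solver='liblinear'))])

gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
                           scoring='accuracy',
                           cv=5,          # 5-fold stratified cross-validation
                           verbose=1,
                           n_jobs=-1)     # use all CPU cores

gs_lr_tfidf.fit(X_train, y_train)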

In the code we're using TfidfVectorizer, which combines CountVectorizer and TfidfTransformer in a single step, instead of using them separately.

The param_grid consisted of two parameter dictionaries:

  1. For the first dictionary, we used the TfidfVectorizer with its default settings (use_idf=True, smooth_idf=True, and norm='l2') to calculate the tf-idfs.
  2. For the second dictionary, we set those parameters to use_idf=False, smooth_idf=False, and norm=None in order to train a model based on raw term frequencies.

Regarding the logistic regression classifier itself, we trained models using L2 and L1 regularization via the penalty parameter and compared different regularization strengths by defining a range of values for the inverse-regularization parameter C.

Note that we used an integer value of 5 for cv, which determines the cross-validation splitting strategy. If None is given, the default 3-fold cross-validation is used.


Because the number of feature vectors and the size of the vocabulary make the grid search computationally quite expensive, we restricted ourselves to a limited number of parameter combinations when initializing the GridSearchCV object and its parameter grid.

Depending on the computer, it may take up to a couple of hours.

gl_lr_tfidf.png

Once the grid search has finished, we can print the parameter set that gave the best results on the held-out data:

best_params.png
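
In code, the printout amounts to something like:

print('Best parameter set: %s' % gs_lr_tfidf.best_params_)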

The output shows that the best grid search results came from the regular tokenizer, without Porter stemming and without the stop-word library, using tf-idfs in combination with a logistic regression classifier with L2 regularization and a regularization strength of C=10.0.


Using the best model from the grid search, we can get the 5-fold cross-validation accuracy score on the training set and the classification accuracy on the test dataset:

best_score_best_estimator.png
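
As a sketch, this corresponds roughly to:

print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)

clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))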

Here best_estimator_ is the estimator that was chosen by the search, i.e., the one that gave the highest score on the left-out data, and best_score_ is the score of best_estimator_ on that left-out data.

The output shows us that our machine learning model can predict whether a movie review is positive or negative with almost 90 percent accuracy.





k-fold cross-validation score

In the previous section, the best_score_ attribute returned the average score over the 5 folds of the best model, since we used cv=5 for GridSearchCV().

In this section, we'll illustrate how cross-validation works via a simple data set of random integers that represent our class labels. We'll compare GridSearchCV() with cross_val_score() run on the same StratifiedKFold() splits.

Here is the code to generate the simple dataset:

K-fold-cross-validation-Xy.png
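
As a sketch (the sizes and seed here are illustrative assumptions, not necessarily the values in the screenshot):

import numpy as np

rng = np.random.RandomState(123)
y = rng.randint(0, 2, size=300)    # random 0/1 class labels
X = rng.random_sample((300, 10))   # random feature matrix with 10 features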

Now we're going to create a cross-validation object using sklearn.model_selection.StratifiedKFold(), a variation of KFold that returns stratified folds. It is defined as follows:

StratifiedKFold.png

Let's run it on our sample data:

cv5-from-StratifiedKFold.png

It generates indices to split the data into training and test sets. Note that we called the split(X, y) method on the object.
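
A minimal sketch, using the X and y generated above:

from sklearn.model_selection import StratifiedKFold

# Materialize the 5 stratified (train_indices, test_indices) pairs so that
# exactly the same splits can be reused by GridSearchCV later.
cv5 = list(StratifiedKFold(n_splits=5).split(X, y))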

To evaluate a score by cross-validation, we'll use sklearn.model_selection.cross_val_score(), which is defined as follows:

cross_val_score_def.png

We get the following output if we run it:

cross-val-score-run.png

We fed the indices of the 5 cross-validation folds (cv5) to the cross_val_score() scorer. It returned 5 accuracy scores, one for each of the 5 test folds.
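
For example, with a plain LogisticRegression estimator:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

lr = LogisticRegression(random_state=0)
scores = cross_val_score(lr, X, y, cv=cv5)  # one accuracy score per test fold
print(scores)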


Now, we'll use GridSearchCV for an exhaustive search over specified parameter values for a LogisticRegression estimator, feeding it the same 5 cross-validation sets via the pre-generated cv5 indices:

GridSearchCVwithLogisticRegressionEstimator.png
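
A sketch of this step; the empty parameter grid is an assumption here, since we only want the per-fold scores rather than an actual search:

from sklearn.model_selection import GridSearchCV

gs = GridSearchCV(lr, param_grid={}, cv=cv5).fit(X, y)

# Per-fold test scores of the single (empty) parameter setting:
for k in range(5):
    print(gs.cv_results_['split%d_test_score' % k][0])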

From the output we can see that the scores for the 5 folds are exactly the same as the ones from cross_val_score via StratifiedKFold().


How about the score? Will it be the same?

The best_score_ attribute of the GridSearchCV object is available only after fitting, and it returns the average accuracy score of the best model:

BestScoreFromGridSearchCV.png

Let's compare it with the average score computed by cross_val_score:

CrossValScore.png
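
In code, the two numbers being compared are roughly:

print(gs.best_score_)                            # average accuracy over the 5 folds
print(cross_val_score(lr, X, y, cv=cv5).mean())  # same folds, same estimator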

As we can see, the results are indeed consistent!





Github Jupyter Notebook source

The GitHub Jupyter notebook is available from Sentiment Analysis.




Refs

"Python Machine Learning" by Sebastian Raschka








