Feature Transformer VectorAssembler in PySpark ML Feature - Part 3

What is VectorAssembler?

class pyspark.ml.feature.VectorAssembler(inputCols=None, outputCol=None, handleInvalid='error'):

VectorAssembler is a transformer that combines a given list of columns into a single vector column.

It is useful for combining raw features and features generated by different feature transformers into a single feature vector, in order to train ML models like logistic regression and decision trees.

VectorAssembler accepts the following input column types: all numeric types, boolean type, and vector type. In each row, the values of the input columns will be concatenated into a vector in the specified order.

Note: VectorAssembler does not require StringIndexer or OneHotEncoder if all of your columns are already numeric (see the short sketch below). In this example we have string columns, so we use StringIndexer and OneHotEncoder first.
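For instance, if every column were already numeric, the assembler could be applied directly. The snippet below is only a minimal sketch; the DataFrame and column names in it are made up for illustration:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler

#reuse the active SparkSession if one exists, otherwise create one
spark = SparkSession.builder.getOrCreate()

#hypothetical all-numeric data: no StringIndexer or OneHotEncoder needed
num_df = spark.createDataFrame([(20, 1.5, 0), (22, 2.0, 1)],
                               ["age", "score", "passed"])

#assemble the numeric columns straight into a vector
VectorAssembler(inputCols=["age", "score"], outputCol="features") \
    .transform(num_df).show(truncate=False)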

Part 1 - What is StringIndexer?

We have already discussed StringIndexer (link).

Part 2 - What is OneHotEncoder?

We have already discussed OneHotEncoder (link).

Let us see an example

Create SparkSession

In [1]:
#import SparkSession
from pyspark.sql import SparkSession

SparkSession is the entry point to Spark for working with RDDs, DataFrames, and Datasets. To create a SparkSession in Python, we use SparkSession.builder and then call the getOrCreate() method.

If a SparkSession already exists, getOrCreate() returns it; otherwise it creates a new one.

In [2]:
spark = SparkSession.builder.appName('xvspark').getOrCreate()
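As a quick check (a sketch, not part of the original notebook), calling getOrCreate() again returns the existing session instead of building a new one:

#getOrCreate() reuses the session created above
same_spark = SparkSession.builder.getOrCreate()
print(same_spark is spark)   #True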

Create dataframe by declaring the schema

In [3]:
from pyspark.sql.types import *

We use the StructType class to define the structure of the DataFrame.

In [4]:
#create the structure of schema
schema = StructType() \
    .add("id", "integer") \
    .add("name", "string") \
    .add("qualification", "string") \
    .add("age", "integer") \
    .add("gender", "string") \
    .add("passed", "integer")
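Equivalently, the same schema can be written with explicit StructField objects. The sketch below is just an alternative spelling of the schema above:

from pyspark.sql.types import StructType, StructField, IntegerType, StringType

#explicit StructField form of the same schema
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("qualification", StringType(), True),
    StructField("age", IntegerType(), True),
    StructField("gender", StringType(), True),
    StructField("passed", IntegerType(), True)
])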
In [5]:
#create data
data = [
    (1,'John',"B.A.", 20, "Male", 1),
    (2,'Martha',"B.Com.", 20, "Female", 1),
    (3,'Mona',"B.Com.", 21, "Female", 1),
    (4,'Harish',"B.Sc.", 22, "Male", 1),
    (5,'Jonny',"B.A.", 22, "Male", 0),
    (6,'Maria',"B.A.", 23, "Female", 1),
    (7,'Monalisa',"B.A.", 21, "Female", 0)
]
In [6]:
#create dataframe
df = spark.createDataFrame(data, schema=schema)
In [7]:
#columns of dataframe
df.columns
Out[7]:
['id', 'name', 'qualification', 'age', 'gender', 'passed']
In [8]:
df.show()
+---+--------+-------------+---+------+------+
| id|    name|qualification|age|gender|passed|
+---+--------+-------------+---+------+------+
|  1|    John|         B.A.| 20|  Male|     1|
|  2|  Martha|       B.Com.| 20|Female|     1|
|  3|    Mona|       B.Com.| 21|Female|     1|
|  4|  Harish|        B.Sc.| 22|  Male|     1|
|  5|   Jonny|         B.A.| 22|  Male|     0|
|  6|   Maria|         B.A.| 23|Female|     1|
|  7|Monalisa|         B.A.| 21|Female|     0|
+---+--------+-------------+---+------+------+

Apply StringIndexer & OneHotEncoder to qualification and gender columns

In [9]:
#import required libraries
from pyspark.ml.feature import StringIndexer

Apply StringIndexer to qualification column

In [10]:
qualification_indexer = StringIndexer(inputCol="qualification", outputCol="qualificationIndex")

#fit the StringIndexer on the DataFrame and transform it
df = qualification_indexer.fit(df).transform(df)
df.show()
+---+--------+-------------+---+------+------+------------------+
| id|    name|qualification|age|gender|passed|qualificationIndex|
+---+--------+-------------+---+------+------+------------------+
|  1|    John|         B.A.| 20|  Male|     1|               0.0|
|  2|  Martha|       B.Com.| 20|Female|     1|               1.0|
|  3|    Mona|       B.Com.| 21|Female|     1|               1.0|
|  4|  Harish|        B.Sc.| 22|  Male|     1|               2.0|
|  5|   Jonny|         B.A.| 22|  Male|     0|               0.0|
|  6|   Maria|         B.A.| 23|Female|     1|               0.0|
|  7|Monalisa|         B.A.| 21|Female|     0|               0.0|
+---+--------+-------------+---+------+------+------------------+

"B.A." gets index 0 because it is the most frequent, then "B.Com" gets index 1 and "B.Sc." gets index 2.

Apply StringIndexer to gender column

In [11]:
gender_indexer = StringIndexer(inputCol="gender", outputCol="genderIndex")

#fit the StringIndexer on the DataFrame and transform it
df = gender_indexer.fit(df).transform(df)
df.show()
+---+--------+-------------+---+------+------+------------------+-----------+
| id|    name|qualification|age|gender|passed|qualificationIndex|genderIndex|
+---+--------+-------------+---+------+------+------------------+-----------+
|  1|    John|         B.A.| 20|  Male|     1|               0.0|        1.0|
|  2|  Martha|       B.Com.| 20|Female|     1|               1.0|        0.0|
|  3|    Mona|       B.Com.| 21|Female|     1|               1.0|        0.0|
|  4|  Harish|        B.Sc.| 22|  Male|     1|               2.0|        1.0|
|  5|   Jonny|         B.A.| 22|  Male|     0|               0.0|        1.0|
|  6|   Maria|         B.A.| 23|Female|     1|               0.0|        0.0|
|  7|Monalisa|         B.A.| 21|Female|     0|               0.0|        0.0|
+---+--------+-------------+---+------+------+------------------+-----------+

Apply OneHotEncoder to qualificationIndex column

In [12]:
from pyspark.ml.feature import OneHotEncoder
In [13]:
#onehotencoder to qualificationIndex
onehotencoder_qualification_vector = OneHotEncoder(inputCol="qualificationIndex", outputCol="qualification_vec")
df = onehotencoder_qualification_vector.fit(df).transform(df)
In [14]:
df.show()
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+
| id|    name|qualification|age|gender|passed|qualificationIndex|genderIndex|qualification_vec|
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+
|  1|    John|         B.A.| 20|  Male|     1|               0.0|        1.0|    (2,[0],[1.0])|
|  2|  Martha|       B.Com.| 20|Female|     1|               1.0|        0.0|    (2,[1],[1.0])|
|  3|    Mona|       B.Com.| 21|Female|     1|               1.0|        0.0|    (2,[1],[1.0])|
|  4|  Harish|        B.Sc.| 22|  Male|     1|               2.0|        1.0|        (2,[],[])|
|  5|   Jonny|         B.A.| 22|  Male|     0|               0.0|        1.0|    (2,[0],[1.0])|
|  6|   Maria|         B.A.| 23|Female|     1|               0.0|        0.0|    (2,[0],[1.0])|
|  7|Monalisa|         B.A.| 21|Female|     0|               0.0|        0.0|    (2,[0],[1.0])|
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+
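Note that "B.Sc." (index 2) is encoded as the all-zero vector (2,[],[]) because OneHotEncoder drops the last category by default (dropLast=True). If you want one slot per category instead, dropLast=False can be passed, as in this sketch (the output column name here is made up):

#keep a slot for every category instead of dropping the last one
onehot_full = OneHotEncoder(inputCol="qualificationIndex",
                            outputCol="qualification_vec_full",
                            dropLast=False)
onehot_full.fit(df).transform(df).select("qualification", "qualification_vec_full").show()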

Apply OneHotEncoder to genderIndex column

In [15]:
#onehotencoder to genderIndex
onehotencoder_gender_vector = OneHotEncoder(inputCol="genderIndex", outputCol="gender_vec")
df = onehotencoder_gender_vector.fit(df).transform(df)
In [16]:
df.show()
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+-------------+
| id|    name|qualification|age|gender|passed|qualificationIndex|genderIndex|qualification_vec|   gender_vec|
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+-------------+
|  1|    John|         B.A.| 20|  Male|     1|               0.0|        1.0|    (2,[0],[1.0])|    (1,[],[])|
|  2|  Martha|       B.Com.| 20|Female|     1|               1.0|        0.0|    (2,[1],[1.0])|(1,[0],[1.0])|
|  3|    Mona|       B.Com.| 21|Female|     1|               1.0|        0.0|    (2,[1],[1.0])|(1,[0],[1.0])|
|  4|  Harish|        B.Sc.| 22|  Male|     1|               2.0|        1.0|        (2,[],[])|    (1,[],[])|
|  5|   Jonny|         B.A.| 22|  Male|     0|               0.0|        1.0|    (2,[0],[1.0])|    (1,[],[])|
|  6|   Maria|         B.A.| 23|Female|     1|               0.0|        0.0|    (2,[0],[1.0])|(1,[0],[1.0])|
|  7|Monalisa|         B.A.| 21|Female|     0|               0.0|        0.0|    (2,[0],[1.0])|(1,[0],[1.0])|
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+-------------+

Feature transformer - VectorAssembler

We want to combine age, qualification_vec, and gender_vec into a single feature vector called features and use it to predict whether a student passed.

We set VectorAssembler's inputCols to age, qualification_vec, and gender_vec, and its outputCol to features.

In [17]:
from pyspark.ml.feature import VectorAssembler
In [18]:
#dataframe columns 
df.columns
Out[18]:
['id',
 'name',
 'qualification',
 'age',
 'gender',
 'passed',
 'qualificationIndex',
 'genderIndex',
 'qualification_vec',
 'gender_vec']
In [19]:
inputCols = [
 'age',
 'qualification_vec',
 'gender_vec'
]
In [20]:
outputCol = "features"
In [21]:
df_va = VectorAssembler(inputCols = inputCols, outputCol = outputCol)
In [22]:
df = df_va.transform(df)
In [23]:
df.select(['features']).toPandas().head(5)
Out[23]:
features
0 [20.0, 1.0, 0.0, 0.0]
1 [20.0, 0.0, 1.0, 1.0]
2 [21.0, 0.0, 1.0, 1.0]
3 (22.0, 0.0, 0.0, 0.0)
4 [22.0, 1.0, 0.0, 0.0]
In [24]:
new_df = df.select(['features','passed'])
new_df.show()
+------------------+------+
|          features|passed|
+------------------+------+
|[20.0,1.0,0.0,0.0]|     1|
|[20.0,0.0,1.0,1.0]|     1|
|[21.0,0.0,1.0,1.0]|     1|
|    (4,[0],[22.0])|     1|
|[22.0,1.0,0.0,0.0]|     0|
|[23.0,1.0,0.0,1.0]|     1|
|[21.0,1.0,0.0,1.0]|     0|
+------------------+------+
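VectorAssembler stores each row as a dense or a sparse vector, whichever is more compact, which is why Harish's row appears as (4,[0],[22.0]) while the others are dense. To inspect every row as a plain array, vector_to_array (available since Spark 3.0) can be used, as in this sketch:

from pyspark.ml.functions import vector_to_array

#expand the ML vector into an ordinary array column for easier inspection
new_df.withColumn("features_arr", vector_to_array("features")) \
      .select("features_arr", "passed") \
      .show(truncate=False)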

Using Pipeline

In [25]:
#import module
from pyspark.ml import Pipeline

Reload Data

In [26]:
#create the structure of schema
schema = StructType() \
    .add("id", "integer") \
    .add("name", "string") \
    .add("qualification", "string") \
    .add("age", "integer") \
    .add("gender", "string") \
    .add("passed", "integer")
In [27]:
#create data
data = [
    (1,'John',"B.A.", 20, "Male", 1),
    (2,'Martha',"B.Com.", 20, "Female", 1),
    (3,'Mona',"B.Com.", 21, "Female", 1),
    (4,'Harish',"B.Sc.", 22, "Male", 1),
    (5,'Jonny',"B.A.", 22, "Male", 0),
    (6,'Maria',"B.A.", 23, "Female", 1),
    (7,'Monalisa',"B.A.", 21, "Female", 0)
]
In [28]:
df = spark.createDataFrame(data, schema=schema)
df.show()
+---+--------+-------------+---+------+------+
| id|    name|qualification|age|gender|passed|
+---+--------+-------------+---+------+------+
|  1|    John|         B.A.| 20|  Male|     1|
|  2|  Martha|       B.Com.| 20|Female|     1|
|  3|    Mona|       B.Com.| 21|Female|     1|
|  4|  Harish|        B.Sc.| 22|  Male|     1|
|  5|   Jonny|         B.A.| 22|  Male|     0|
|  6|   Maria|         B.A.| 23|Female|     1|
|  7|Monalisa|         B.A.| 21|Female|     0|
+---+--------+-------------+---+------+------+

Create Pipeline and pass all stages

In [29]:
#Convert qualification and gender columns to numeric
qualification_indexer = StringIndexer(inputCol="qualification", outputCol="qualificationIndex")
gender_indexer = StringIndexer(inputCol="gender", outputCol="genderIndex")


#One-hot encode qualificationIndex and genderIndex
onehot_encoder = OneHotEncoder(inputCols=["qualificationIndex", "genderIndex"],
                        outputCols=["qualification_vec", "gender_vec"])

#Merge multiple columns into a vector column
vector_assembler = VectorAssembler(inputCols=['age', 'qualification_vec', 'gender_vec'],
                          outputCol='features')


#Create the pipeline and pass all stages to it
pipeline = Pipeline(stages=[qualification_indexer, 
                            gender_indexer, 
                            onehot_encoder, 
                            vector_assembler
                           ])

#fit and transform
df_transformed = pipeline.fit(df).transform(df)
df_transformed.show()
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+-------------+------------------+
| id|    name|qualification|age|gender|passed|qualificationIndex|genderIndex|qualification_vec|   gender_vec|          features|
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+-------------+------------------+
|  1|    John|         B.A.| 20|  Male|     1|               0.0|        1.0|    (2,[0],[1.0])|    (1,[],[])|[20.0,1.0,0.0,0.0]|
|  2|  Martha|       B.Com.| 20|Female|     1|               1.0|        0.0|    (2,[1],[1.0])|(1,[0],[1.0])|[20.0,0.0,1.0,1.0]|
|  3|    Mona|       B.Com.| 21|Female|     1|               1.0|        0.0|    (2,[1],[1.0])|(1,[0],[1.0])|[21.0,0.0,1.0,1.0]|
|  4|  Harish|        B.Sc.| 22|  Male|     1|               2.0|        1.0|        (2,[],[])|    (1,[],[])|    (4,[0],[22.0])|
|  5|   Jonny|         B.A.| 22|  Male|     0|               0.0|        1.0|    (2,[0],[1.0])|    (1,[],[])|[22.0,1.0,0.0,0.0]|
|  6|   Maria|         B.A.| 23|Female|     1|               0.0|        0.0|    (2,[0],[1.0])|(1,[0],[1.0])|[23.0,1.0,0.0,1.0]|
|  7|Monalisa|         B.A.| 21|Female|     0|               0.0|        0.0|    (2,[0],[1.0])|(1,[0],[1.0])|[21.0,1.0,0.0,1.0]|
+---+--------+-------------+---+------+------+------------------+-----------+-----------------+-------------+------------------+
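One benefit of the Pipeline is that the fitted PipelineModel can be reused on new rows without refitting each stage. The sketch below uses a made-up record to illustrate this:

#fit once, then reuse the fitted stages on unseen rows
pipeline_model = pipeline.fit(df)

#hypothetical new record with categories already seen during fitting
new_data = spark.createDataFrame([(8, 'Ravi', "B.Sc.", 24, "Male", 1)], schema=schema)
pipeline_model.transform(new_data).select("features", "passed").show(truncate=False)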

In [30]:
df_transformed = df_transformed.select(['features','passed'])
df_transformed.show()
+------------------+------+
|          features|passed|
+------------------+------+
|[20.0,1.0,0.0,0.0]|     1|
|[20.0,0.0,1.0,1.0]|     1|
|[21.0,0.0,1.0,1.0]|     1|
|    (4,[0],[22.0])|     1|
|[22.0,1.0,0.0,0.0]|     0|
|[23.0,1.0,0.0,1.0]|     1|
|[21.0,1.0,0.0,1.0]|     0|
+------------------+------+

You can convert it to a Pandas DataFrame.

In [31]:
df_transformed.toPandas()
Out[31]:
features passed
0 [20.0, 1.0, 0.0, 0.0] 1
1 [20.0, 0.0, 1.0, 1.0] 1
2 [21.0, 0.0, 1.0, 1.0] 1
3 (22.0, 0.0, 0.0, 0.0) 1
4 [22.0, 1.0, 0.0, 0.0] 0
5 [23.0, 1.0, 0.0, 1.0] 1
6 [21.0, 1.0, 0.0, 1.0] 0
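As a next step, the assembled features column can be fed directly to a Spark ML estimator such as LogisticRegression. The sketch below is only illustrative; the toy data set here is far too small for a meaningful model:

from pyspark.ml.classification import LogisticRegression

#Spark ML estimators expect a single vector column of features
lr = LogisticRegression(featuresCol="features", labelCol="passed")
lr_model = lr.fit(df_transformed)
lr_model.transform(df_transformed).select("features", "passed", "prediction").show(truncate=False)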

PySpark

  1. How to Parallelize and Distribute Collection in PySpark
  2. Role of StringIndexer and Pipelines in PySpark ML Feature - Part 1
  3. Role of OneHotEncoder and Pipelines in PySpark ML Feature - Part 2
  4. Feature Transformer VectorAssembler in PySpark ML Feature - Part 3
  5. Logistic Regression in PySpark (ML Feature) with Breast Cancer Data Set
