Python Object Serialization and Deserialization With Pickle

In this blog, we will serialize a list, a dictionary, a function result and a class object with the pickle module. After that, we will deserialize them as well.

What is pickle?

The pickle module implements binary protocols for serializing and de-serializing a Python object structure.

"Pickling" is the process whereby a Python object hierarchy is converted into a byte stream, and "unpickling" is the inverse operation, whereby a byte stream (from a binary file or bytes-like object) is converted back into an object hierarchy.

Pickling and unpickling are also known as "serialization" and "deserialization".

To serialize an object to an open file, we call the dump() function; to deserialize it, we call the load() function. The dumps() and loads() variants work with bytes objects in memory instead of files.
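A quick sketch of the in-memory variants: dumps() returns the pickled bytes directly, and loads() rebuilds the object from them.

```python
import pickle

data = {"name": "John", "age": 30}

blob = pickle.dumps(data)        # serialize to a bytes object
print(type(blob))                # <class 'bytes'>

restored = pickle.loads(blob)    # deserialize back to a dict
print(restored == data)          # True
```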

The pickle module keeps track of the objects it has already serialized, so that later references to the same object won’t be serialized again.
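We can see this object tracking in action: if the same object appears twice in what we pickle, both references still point to one object after unpickling.

```python
import pickle

shared = [1, 2, 3]
container = [shared, shared]       # two references to one list

restored = pickle.loads(pickle.dumps(container))

restored[0].append(4)              # mutate through the first reference
print(restored[1])                 # [1, 2, 3, 4] -- identity was preserved
print(restored[0] is restored[1])  # True
```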

What can be pickled and unpickled?

The following types can be pickled:

  1. None, True, and False
  2. integers, floating point numbers, complex numbers
  3. strings, bytes, bytearrays
  4. tuples, lists, sets, and dictionaries containing only picklable objects
  5. functions defined at the top level of a module (using def, not lambda)
  6. built-in functions defined at the top level of a module
  7. classes that are defined at the top level of a module

Let us see examples of serialization

Import module

In [1]:
import pickle
dir(pickle)

Serialize dictionary

In [18]:
dict1 = {1: "John", 2: "Martha", 3:"Monalisa", 4: "Priyanka", 5: "Sheron"}
dict1
Out[18]:
{1: 'John', 2: 'Martha', 3: 'Monalisa', 4: 'Priyanka', 5: 'Sheron'}

Open file

open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)

Open file and return a corresponding file object. If the file cannot be opened, an OSError is raised.


mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode.

The available modes are:

'r': open for reading (default)

'w': open for writing, truncating the file first

'x': open for exclusive creation, failing if the file already exists

'a': open for writing, appending to the end of file if it exists

'b': binary mode

't': text mode (default)

'+': open for updating (reading and writing)
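As a side note, a with block is a safer way to handle these files: it closes the file automatically even if an error occurs, so the explicit close() calls used below are not strictly needed. A minimal sketch:

```python
import pickle

dict1 = {1: "John", 2: "Martha"}

# the file is closed automatically when each block ends
with open("dict.pickle", "wb") as pickle_out:
    pickle.dump(dict1, pickle_out)

with open("dict.pickle", "rb") as pickle_in:
    print(pickle.load(pickle_in))   # {1: 'John', 2: 'Martha'}
```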

In [19]:
#open the file with mode 'wb'
pickle_out = open("dict.pickle","wb")

You will see a dict.pickle file created in the current directory.

Dump the object

pickle.dump(obj, file, protocol=None, *, fix_imports=True, buffer_callback=None)

Write the pickled representation of the object obj to the open file object file.
In [20]:
#dump the dictionary object
pickle.dump(dict1, pickle_out)

Close the file

In [21]:
pickle_out.close()

Serialize a list

In [10]:
list1 = ["a", "b", "c", "d", "e", "f"]
list1
Out[10]:
['a', 'b', 'c', 'd', 'e', 'f']
In [11]:
pickle_out_list = open("list.pickle","wb")
pickle.dump(list1, pickle_out_list)
pickle_out_list.close()

Serialize a function result

In [28]:
def add(x, y):
    return f"Addition of {x} & {y} = {x + y}"
In [29]:
a1 = add(2, 5)
print(a1)
Addition of 2 & 5 = 7

Open file and dump object

In [31]:
pickle_out_obj1 = open("obj1.pickle","wb")
pickle.dump(a1, pickle_out_obj1)
pickle_out_obj1.close()

Serialize a class

Create a simple class called Blog

In [32]:
class Blog:
    def __init__(self, id, title, author):
        self.id = id
        self.title = title
        self.author = author

    def __str__(self):
        return f'Id: {self.id}\nTitle: {self.title}\nAuthor: {self.author}'
In [33]:
blog = Blog(1, "How to read file in python", "Nutan")
print(blog)
Id: 1
Title: How to read file in python
Author: Nutan

Open file and dump blog object

In [34]:
pickle_out_obj2 = open("obj2.pickle","wb")
pickle.dump(blog, pickle_out_obj2)
pickle_out_obj2.close()

We have serialized a dictionary, a list, a function result and a class instance. Let us deserialize all of them.

Deserialize dictionary with pickle.load() method

Open pickled file

In [38]:
#open the file with mode 'rb'
dict_pickle_in = open("dict.pickle","rb")

Load pickled file

pickle.load(file, *, fix_imports=True, encoding='ASCII', errors='strict', buffers=None)

Read the pickled representation of an object from the open file object file and return the reconstituted object hierarchy specified therein.
In [39]:
#load the object
dict1 = pickle.load(dict_pickle_in)

print(dict1)
{1: 'John', 2: 'Martha', 3: 'Monalisa', 4: 'Priyanka', 5: 'Sheron'}

Deserialize list

In [41]:
list_pickle_in = open("list.pickle","rb")
list1 = pickle.load(list_pickle_in)

print(list1)
['a', 'b', 'c', 'd', 'e', 'f']

Deserialize function result

In [43]:
obj1_pickle_in = open("obj1.pickle","rb")
a1 = pickle.load(obj1_pickle_in)

print(a1)
Addition of 2 & 5 = 7

Deserialize class

In [45]:
obj2_pickle_in = open("obj2.pickle","rb")
blog1 = pickle.load(obj2_pickle_in)

print(blog1)
Id: 1
Title: How to read file in python
Author: Nutan
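One caution from the pickle documentation: the pickle module is not secure, and unpickling data can execute arbitrary code, so only load data you trust. When in doubt, pickletools.dis() lets you inspect a byte stream's opcodes without executing them:

```python
import pickle
import pickletools

blob = pickle.dumps({1: "John"})
pickletools.dis(blob)   # prints the pickle opcodes without running them
```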