Integrating LVM with Hadoop

Ishika Mandloi
Nov 3, 2020

What is LVM?

LVM stands for Logical Volume Management. With LVM, we can turn one or more hard drives into physical volumes. These physical volumes are combined into a volume group, which acts like a single new hard drive. We can then create partitions inside this volume group, and these are known as logical volumes.

A volume group doesn't exist physically, but it works the same as a hard drive. We can extend or reduce these logical volumes as per our needs.

Working with LVM

We will integrate LVM with Hadoop to provide elastic storage to a DataNode, which then contributes this storage to the cluster through the NameNode.

So, let's start!

We have two hard disks: /dev/sdb of 3 GiB and /dev/sdc of 5 GiB.
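Before creating physical volumes, it helps to confirm the kernel actually sees both disks. The names /dev/sdb and /dev/sdc come from this setup; yours may differ:

```shell
# List all disks with their names and sizes; the two new drives should
# show up with TYPE "disk" (e.g. sdb at 3G and sdc at 5G in this setup).
lsblk -d -o NAME,SIZE,TYPE
```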

Now, we will create physical volumes from these hard disks

The command to create a physical volume is:

#pvcreate /dev/sdc

#pvcreate /dev/sdb

After creating both physical volumes, verify them using the command:

#pvdisplay <hard disk name>

Now that our physical volumes are created, we will combine them into a volume group using the command:

#vgcreate <name> /dev/sdb /dev/sdc

After creating it, check using the command:

#vgdisplay <name>

Now we can use the volume group as a storage device, and from it we will create logical volumes.

The command to create a logical volume is:

#lvcreate --size <value> --name <name of lv> <name of vg>

We can check using the command:

#lvdisplay <name of vg>/<name of lv>

Normally, when we create a partition, we format it and mount it on a folder to make it usable. In the same way, we have to format the logical volume and mount it on a folder.

To format the logical volume, we can use the mkfs command:

#mkfs.ext4 <lv path>

Now, we have to mount it using command:

#mount <lv path> <folder name>
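The whole sequence so far can be sketched as one script. This is a minimal sketch, not a drop-in tool: the device names (/dev/sdb, /dev/sdc), the VG/LV names dnvg/dnlv, and the mount point /dn are assumptions from this walkthrough. Real runs need root, so by default the script only prints each command; set DRY_RUN=0 to actually execute them.

```shell
#!/bin/sh
# Sketch of the PV -> VG -> LV -> mount workflow described above.
# DRY_RUN defaults to 1, which prints commands instead of running them.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run pvcreate /dev/sdb /dev/sdc           # turn both disks into physical volumes
run vgcreate dnvg /dev/sdb /dev/sdc      # pool them into one volume group
run lvcreate --size 3G --name dnlv dnvg  # carve out a 3 GiB logical volume
run mkfs.ext4 /dev/dnvg/dnlv             # format it with ext4
run mkdir -p /dn                         # folder to mount it on
run mount /dev/dnvg/dnlv /dn             # mount it
```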

Now, to integrate it with Hadoop, specify this folder as the DataNode directory in the hdfs-site.xml configuration file of the Hadoop cluster.
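For example, if the logical volume is mounted on /dn (a hypothetical path for this setup), the DataNode's hdfs-site.xml would look roughly like this. The property name dfs.datanode.data.dir is the standard one in Hadoop 2.x/3.x; older 1.x releases call it dfs.data.dir:

```xml
<configuration>
  <!-- Directory where the DataNode stores HDFS blocks; here it is the
       folder the logical volume is mounted on (assumed to be /dn). -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/dn</value>
  </property>
</configuration>
```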

You can see it contributing only 2.89 GB to the cluster.

1. We can increase the storage contribution by increasing the size of the LV.

There are two steps to increasing an LV:

a. extending the LV

b. resizing the filesystem on it

To extend we have command:

#lvextend --size <value> <lv path>

To resize the filesystem, we have the command:

#resize2fs <lv path>
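Both growth steps can be sketched together. The path /dev/dnvg/dnlv is a hypothetical VG/LV name for this walkthrough; substitute your own. These commands need root and real devices, so treat this as a sketch. (lvextend also has an -r/--resizefs flag that runs the filesystem resize for you.)

```shell
# Grow the LV by 2 GiB, then grow the ext4 filesystem to match.
# "+2G" adds 2 GiB on top of the current size.
lvextend --size +2G /dev/dnvg/dnlv
# ext4 can be grown while the filesystem is still mounted.
resize2fs /dev/dnvg/dnlv
```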

After increasing the LV size, you can see it contributing 4.86 GB to the cluster.

2. We can reduce the storage contribution of the DataNode by reducing the LV size.

The command to reduce an LV is:

#lvreduce --size <value> <lv path>
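Shrinking is riskier than growing: the ext4 filesystem must be unmounted and shrunk before the LV itself is reduced, or data at the end of the volume is lost. A hedged sketch of the sequence, assuming the hypothetical names dnvg/dnlv mounted on /dn (take a backup first; requires root):

```shell
umount /dn                         # ext4 cannot be shrunk while mounted
e2fsck -f /dev/dnvg/dnlv           # resize2fs requires a clean filesystem check
resize2fs /dev/dnvg/dnlv 2G        # shrink the filesystem to 2 GiB first
lvreduce --size 2G /dev/dnvg/dnlv  # then shrink the LV to the same size
mount /dev/dnvg/dnlv /dn           # remount; the DataNode now contributes less
```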
