Learn how to create an LVM partition in Linux using industry-standard tools and best practices for modern server environments. This step-by-step guide covers physical volumes, volume groups, logical volumes, filesystem creation, mounting, live storage expansion, and advanced LVM management techniques for Ubuntu, RHEL, VPS, and enterprise Linux systems. Whether you are deploying scalable storage for production workloads or optimizing server infrastructure, this tutorial provides the commands, examples, and expert tips needed to manage Linux storage efficiently with LVM.
How to Create an LVM Partition in Linux
Logical Volume Manager (LVM) is one of the most consequential storage technologies in the Linux ecosystem. By decoupling logical storage from the constraints of physical hardware, LVM enables administrators to resize volumes online, migrate data across disks, take point-in-time snapshots, and scale capacity on demand, all without service interruption. This guide covers the full operational lifecycle of LVM: from architecture fundamentals through provisioning, expansion, monitoring, and recovery.
Architecture Deep Dive
Understanding the three-tier storage abstraction is a prerequisite to effective LVM administration.
Physical Volumes (PV)
A physical volume is any block device initialized for LVM use: a raw disk (/dev/sdb), a partition (/dev/sdb1), an NVMe namespace (/dev/nvme1n1), or even a device-mapper target such as a hardware RAID array. The pvcreate command writes an LVM label within the first few sectors of the device and creates a small metadata area (MDA) that stores the volume group descriptor.
Each PV is divided into Physical Extents (PE) — the fundamental allocation unit. The default PE size is 4 MiB; for large volumes (>1 TiB), increasing PE size reduces metadata overhead and can improve performance on certain workloads.
Volume Groups (VG)
A volume group pools one or more physical volumes into a single administrative domain. The VG presents a unified, contiguous address space measured in physical extents. All metadata — including the PE-to-LE mapping, snapshot relationships, and thin pool allocation — is replicated across the MDA of every PV in the group, providing redundancy at the metadata level.
Volume groups support tags (key-value labels) for scripted administration and activation controls for selective import in clustered environments.
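As a hedged example, a tag (the name tier_backup below is arbitrary) can be attached to a VG and later used to activate only the VGs that carry it:
# Attach a tag to an existing VG (tag name is illustrative)
sudo vgchange --addtag tier_backup vg_data
# Show VGs with their tags
sudo vgs -o vg_name,vg_tags
# Activate only VGs carrying that tag (useful for selective import)
sudo vgchange -ay @tier_backup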
Logical Volumes (LV)
Logical volumes are the consumer-facing layer. They behave identically to block devices (/dev/mapper/vg_data-lv_web) and are addressed in Logical Extents (LE). The LVM kernel module (device-mapper) maps each LE to one or more PEs at runtime, enabling the flexible layouts that define LVM’s value: linear, striped, mirrored, and thin-provisioned.
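To observe this mapping on a live system, the device-mapper table and the per-segment report can both be inspected (the volume names follow the examples used later in this guide):
# Raw device-mapper table backing the LV
sudo dmsetup table vg_data-lv_web
# Per-segment view showing which PVs back each logical extent range
sudo lvs --segments -o +devices vg_data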
Pre-Provisioning: Disk Identification and Preparation
Enumerate Block Devices
lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,MOUNTPOINT,UUID
This extended output reveals filesystem signatures and existing mount points, preventing accidental data loss.
# Check for existing partition tables or filesystem signatures
sudo wipefs --all --no-act /dev/sdb # dry-run; remove --no-act to execute
sudo blkid /dev/sdb
Warning: wipefs destroys all filesystem and partition metadata on the target device. Verify the target device path before execution. On production systems, use --no-act first to preview what would be removed.
Validate Device Health Before Use
Do not provision LVM on a degraded device. Run a quick SMART check:
sudo smartctl -H /dev/sdb
sudo smartctl -a /dev/sdb | grep -E "Reallocated|Pending|Uncorrectable"
Any non-zero count in those attributes should disqualify the device from production use.
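A short loop makes the health check repeatable across several candidate disks; the device list below is an example and should be adjusted to your hardware:
# Run a SMART health check on each candidate device before provisioning
for dev in /dev/sdb /dev/sdc; do
    echo "== $dev =="
    sudo smartctl -H "$dev"
done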
Step 1 — Partition the Disk
While LVM can operate on raw disks (/dev/sdb directly), partitioning is best practice: it communicates intent to other tools, preserves alignment, and avoids conflicts with firmware that inspects partition tables.
Using parted (Recommended for Scripting)
sudo parted /dev/sdb --script \
mklabel gpt \
mkpart primary 1MiB 100% \
set 1 lvm on
sudo partprobe /dev/sdb
The 1MiB start offset ensures 4K alignment, which is critical for SSDs, NVMe devices, and modern HDDs with 4K physical sectors. A GPT label is preferable to MBR for devices larger than 2 TiB and for UEFI systems.
Using fdisk (Interactive)
sudo fdisk /dev/sdb
Inside fdisk:
g # create GPT partition table
n # new partition (accept defaults for full-disk)
t # change type
lvm # select the "Linux LVM" type (recent fdisk accepts this alias; press L to list numeric codes, which vary by version; use type 8e on MBR)
w # write and exit
Verify alignment after creation:
sudo parted /dev/sdb align-check optimal 1

Step 2 — Initialize the Physical Volume
sudo pvcreate --metadatasize 250k --dataalignment 1m /dev/sdb1
Key options:
- --metadatasize controls the size of the metadata area. Increase it for environments with hundreds of logical volumes or snapshots.
- --dataalignment aligns the first PE boundary. Match this to your storage's optimal I/O size (1 MiB is safe for most hardware; check cat /sys/block/sdb/queue/optimal_io_size, as in the sketch below).
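A minimal sketch of deriving the alignment from the device-reported optimal I/O size, falling back to 1 MiB when the device reports 0 (the device name and fallback value are assumptions to adapt):
# Read the device-reported optimal I/O size; many devices report 0
IOSZ=$(cat /sys/block/sdb/queue/optimal_io_size)
[ "$IOSZ" -gt 0 ] || IOSZ=1048576   # fall back to 1 MiB
# Initialize the PV with that alignment (suffix "b" = bytes)
sudo pvcreate --dataalignment "${IOSZ}b" /dev/sdb1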
Inspect the result:
sudo pvdisplay --units m /dev/sdb1
sudo pvs -o +pv_mda_count,pv_mda_free /dev/sdb1
Step 3 — Create the Volume Group
sudo vgcreate --physicalextentsize 4M vg_data /dev/sdb1
For VGs that will eventually exceed 1 TiB of total PV capacity, consider increasing the PE size to 16 MiB or 32 MiB. Changing the PE size on an existing VG is heavily constrained (every LV size must divide evenly by the new extent size), so choose it deliberately at creation time.
Verify:
sudo vgdisplay vg_data
sudo vgs -o +vg_mda_count,vg_mda_free
Check the free extent count — this is the operational budget for new LV allocations.
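Free capacity can also be reported as raw extent counts, which is convenient for scripted allocation checks:
# Report extent size plus total and free extent counts for the VG
sudo vgs -o vg_name,vg_extent_size,vg_extent_count,vg_free_count vg_data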
Step 4 — Create Logical Volumes
Fixed-Size Linear Volume
sudo lvcreate --size 20G --name lv_web vg_data
Allocate All Remaining Space
sudo lvcreate --extents 100%FREE --name lv_backup vg_data
Striped Volume (Improved Throughput)
When a VG spans multiple PVs, striping distributes I/O across them, increasing sequential throughput:
sudo lvcreate --size 40G --name lv_db --stripes 2 --stripesize 64K vg_data
The stripe count cannot exceed the number of PVs in the VG, and each stripe needs free extents on a separate PV. Stripe size should match the workload's I/O pattern: 64K–256K for databases, 512K–1M for large sequential workloads.
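After creating a striped LV, the segment report confirms the stripe count and size actually allocated (this assumes the lv_db volume above):
# Verify the stripe layout of the new volume
sudo lvs --segments vg_data/lv_db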
Thin-Provisioned Volume
Thin provisioning allows LVs to present more capacity than is physically allocated, deferring actual block allocation until data is written. This is ideal for VM disk images and development environments.
# Create a 40 GiB thin pool in the VG (use --extents 80%VG to size it proportionally instead)
sudo lvcreate --type thin-pool --size 40G --name lv_thinpool vg_data
# Create a thin volume from the pool (its virtual size can exceed the pool size)
sudo lvcreate --type thin --thinpool lv_thinpool --virtualsize 100G --name lv_vm01 vg_data
Monitor thin pool usage carefully — overcommitment without monitoring leads to pool exhaustion and I/O errors:
sudo lvs -o +data_percent,metadata_percent vg_data/lv_thinpool
Verify LV creation:
sudo lvs --segments vg_data
Step 5 — Create and Mount the Filesystem
ext4
sudo mkfs.ext4 -L webdata -E lazy_itable_init=0,lazy_journal_init=0 /dev/vg_data/lv_web
The -E lazy_itable_init=0,lazy_journal_init=0 flags force full inode table initialization at mkfs time, eliminating the background init process that can affect initial I/O performance on busy systems.
XFS
sudo mkfs.xfs -L webdata -f /dev/vg_data/lv_web
XFS is strongly recommended for: databases, large file repositories, high-concurrency workloads, and any volume exceeding ~1 TiB. It supports online growing (but not shrinking), has excellent parallelism, and handles large directory operations efficiently.
Mount
sudo mkdir -p /mnt/webdata
sudo mount -o defaults,noatime /dev/vg_data/lv_web /mnt/webdata
The noatime option suppresses access time updates on reads, meaningfully reducing write amplification on read-heavy workloads.
Step 6 — Persistent Mounting via /etc/fstab
Avoid unstable kernel device paths such as /dev/sdX in /etc/fstab. For LVM volumes, use the filesystem UUID (stable across renames) or the persistent device-mapper path (/dev/mapper/vg_data-lv_web).
sudo blkid -s UUID -o value /dev/vg_data/lv_web
Edit /etc/fstab:
UUID=3a4b5c6d-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/webdata ext4 defaults,noatime 0 2
For XFS volumes, the fsck pass should be 0 (XFS has its own journal recovery; fsck is not used at boot):
UUID=3a4b5c6d-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/webdata xfs defaults,noatime 0 0
Validate without rebooting:
sudo mount -a && df -hT /mnt/webdata

Advanced Operations
Online Logical Volume Extension
One of LVM's primary operational advantages is online resizing: no unmounting is required for most filesystems.
# Extend the LV by 10 GiB
sudo lvextend -L +10G /dev/vg_data/lv_web
# Grow the filesystem to fill the new LV space
sudo resize2fs /dev/vg_data/lv_web # ext4
sudo xfs_growfs /mnt/webdata # XFS (requires mount point, not device)
Both operations can be combined using lvextend --resizefs, which calls the appropriate filesystem grow tool (fsadm handles ext4 and XFS):
sudo lvextend --resizefs -L +10G /dev/vg_data/lv_web
Note: XFS volumes cannot be shrunk. If you anticipate needing to reclaim space, use ext4 or thin provisioning instead.
Reduce a Logical Volume (ext4 Only)
Shrinking requires unmounting:
sudo umount /mnt/webdata
sudo e2fsck -f /dev/vg_data/lv_web
sudo resize2fs /dev/vg_data/lv_web 15G
sudo lvreduce -L 15G /dev/vg_data/lv_web
sudo mount /mnt/webdata
Always run e2fsck before reducing. Never reduce the LV to less than the filesystem’s occupied space.
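Two optional safeguards, shown as a sketch: resize2fs can report the smallest size the filesystem will tolerate, and lvreduce --resizefs can drive the filesystem shrink and the LV reduction in a single step:
# Print the minimum size the (unmounted, checked) filesystem can shrink to
sudo resize2fs -P /dev/vg_data/lv_web
# Alternatively, shrink the filesystem and the LV together in one command
sudo lvreduce --resizefs -L 15G /dev/vg_data/lv_web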
Expand a Volume Group with a New Disk
# Initialize the new device
sudo pvcreate /dev/sdc1
# Add it to the existing VG
sudo vgextend vg_data /dev/sdc1
# Verify available extents
sudo vgs
No downtime or unmounting is required. New capacity is immediately available for LV provisioning or extension.
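For example, the new extents can be consumed immediately by growing an existing LV and its filesystem in one pass (the volume name is taken from the earlier examples):
# Grow lv_backup into all newly added free extents and resize its filesystem
sudo lvextend --resizefs -l +100%FREE /dev/vg_data/lv_backup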
Migrate Data Between Physical Volumes (Online pvmove)
pvmove relocates physical extents from one PV to another while the filesystem remains mounted. This is the standard procedure for decommissioning a disk without downtime:
# Move all extents off /dev/sdb1 onto remaining PVs in the VG
sudo pvmove /dev/sdb1
# Or target a specific LV
sudo pvmove --name lv_web /dev/sdb1 /dev/sdc1
pvmove is resumable: if interrupted, re-run the same command to continue from where it left off. Once complete:
sudo vgreduce vg_data /dev/sdb1
sudo pvremove /dev/sdb1
LVM Snapshots
Snapshots capture a point-in-time consistent view of a logical volume using copy-on-write semantics. They are essential for online backups.
# Create a 5 GiB snapshot of lv_web
sudo lvcreate --snapshot --size 5G --name lv_web_snap /dev/vg_data/lv_web
The snapshot only requires space proportional to the data that changes after the snapshot is taken, not the full volume size.
Mount read-only for backup:
sudo mount -o ro /dev/vg_data/lv_web_snap /mnt/snap
rsync -aAX /mnt/snap/ /backup/webdata/
sudo umount /mnt/snap
sudo lvremove /dev/vg_data/lv_web_snap
Monitor snapshot usage: if a snapshot's CoW space fills before it is removed, the snapshot is automatically invalidated:
sudo lvs -o +snap_percent vg_data
Keep snapshots short-lived. Accumulating large amounts of changed data in a snapshot degrades the performance of the origin volume due to the overhead of CoW metadata lookups.
Essential Reference Commands
| Operation | Command |
| --- | --- |
| List all PVs | sudo pvs -o+pv_mda_count |
| List all VGs | sudo vgs -o+vg_free |
| List all LVs | sudo lvs -o+segtype,stripes |
| Detailed PV info | sudo pvdisplay /dev/sdb1 |
| Detailed VG info | sudo vgdisplay vg_data |
| Detailed LV info | sudo lvdisplay /dev/vg_data/lv_web |
| Activate all VGs | sudo vgchange -ay |
| Scan for PVs | sudo pvscan --cache |
| Remove an LV | sudo lvremove /dev/vg_data/lv_web |
| Remove a VG | sudo vgremove vg_data |
| Remove a PV | sudo pvremove /dev/sdb1 |
| Export a VG | sudo vgexport vg_data |
| Import a VG | sudo vgimport vg_data |
| Rename a VG | sudo vgrename vg_data vg_production |
| Rename an LV | sudo lvrename vg_data lv_web lv_frontend |
Production Best Practices
Use UUIDs everywhere. Device paths change under udev renames, multipath reconfiguration, and kernel upgrades. UUIDs do not.
Separate data domains into dedicated LVs. A monolithic LV for all application data creates operational risk: a runaway process filling the filesystem affects all services on it. Separate volumes for databases, logs, application data, and container storage allow independent sizing, snapshotting, and filesystem tuning.
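As an illustrative sketch (names and sizes are placeholders), a single VG might be carved into independent volumes per data domain:
# Separate volumes so each domain can be sized, snapshotted, and tuned independently
sudo lvcreate -L 50G -n lv_db   vg_data
sudo lvcreate -L 20G -n lv_logs vg_data
sudo lvcreate -L 80G -n lv_app  vg_data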
Prefer XFS for large or high-concurrency workloads. XFS’s delayed allocation, speculative preallocation, and scalable B-tree structures outperform ext4 under heavy parallel I/O. ext4 remains appropriate for small volumes, boot partitions, and use cases requiring shrink support.
Monitor PE free space with alerting. A VG with zero free extents will cause all write operations on its LVs to fail. Configure alerting at the VG level:
# Script-friendly free space query (outputs free space in GiB)
sudo vgs --noheadings --units g -o vg_free vg_data | tr -d ' g'
Thin pool monitoring is mandatory. Thin-provisioned pools require active monitoring. Configure lvm2-monitor.service (part of the lvm2 package) and set thin_pool_autoextend_threshold and thin_pool_autoextend_percent in /etc/lvm/lvm.conf for automatic pool extension:
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 20
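On most systemd-based distributions, the monitoring daemon that drives autoextension is enabled like this:
# Ensure dmeventd-based monitoring is running so autoextend can trigger
sudo systemctl enable --now lvm2-monitor.service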
Avoid long-lived snapshots. Snapshots are operationally useful for backup windows, but every write to the origin LV during the snapshot’s lifetime incurs a CoW penalty. In high-write environments, this overhead compounds rapidly.
Test pvmove before decommissioning disks. Rehearse the migration process on non-critical LVs first. On large volumes, pvmove can take hours; verify that pvmove progress can be monitored and that your maintenance window accounts for the data volume.
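A sketch of monitoring a move in flight, reusing the device path from the earlier example; the interval flag controls how often pvmove prints status, and copy_percent can be polled from a second shell:
# Report progress every 30 seconds while relocating extents
sudo pvmove -i 30 /dev/sdb1
# From another terminal: completion percentage of the internal pvmove mirror
sudo lvs -a -o lv_name,copy_percent vg_data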
Troubleshooting
VG or PV Not Found After Reboot
sudo pvscan --cache
sudo vgchange -ay
sudo lvchange -ay /dev/vg_data/lv_web
If still missing, inspect dmesg and /var/log/syslog for device-mapper or udev errors. Confirm the underlying block device is present with lsblk.
Volume Group Metadata Corruption
LVM stores metadata on all PVs in the VG. If one PV’s metadata is corrupted:
sudo vgck vg_data # check for inconsistencies
sudo vgck --updatemetadata vg_data # restore consistent metadata from a healthy PV
Use --partial activation as a last resort to access data from surviving PVs when one has failed:
sudo vgchange -ay --partial vg_data
resize2fs Fails After lvextend
Confirm the LV was actually extended:
sudo lvdisplay /dev/vg_data/lv_web | grep "LV Size"
Ensure e2fsck is not reporting errors before resize:
sudo e2fsck -n /dev/vg_data/lv_web
Thin Pool Full / LV Deactivated
If a thin pool runs out of data space, writes to its thin volumes are queued and eventually fail with I/O errors (the exact behavior depends on configuration). Recover by extending the pool:
sudo lvextend -L +10G vg_data/lv_thinpool
Affected thin volumes normally resume once the pool has free space again.
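If the pool's metadata space (rather than its data space) is what filled up, it can be grown separately; a hedged example:
# Grow the thin pool's metadata volume if metadata_percent is near 100
sudo lvextend --poolmetadatasize +1G vg_data/lv_thinpool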

Rescan After Adding Disks (No Reboot)
sudo partprobe
echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan
sudo pvscan –cache
LVM transforms Linux storage administration from a static, partition-bound discipline into a dynamic, policy-driven one. The primitives — physical volumes, volume groups, logical volumes — compose into powerful abstractions: online resizing, live data migration, thin provisioning, and point-in-time snapshots. Mastering these tools is non-negotiable for administrators responsible for production database servers, virtualization hosts, container platforms, and any environment where storage demands evolve faster than hardware procurement cycles.
The configurations documented here are operationally proven across enterprise Linux deployments on bare-metal, VPS, and cloud-attached block storage. Adapt PE size, stripe configuration, and thin pool thresholds to your specific workload characteristics — there is no universally optimal configuration, but the principles above provide a sound baseline from which to tune.
LVM provides flexible, scalable, and enterprise-grade storage management for Linux systems. By separating logical storage from physical hardware, administrators gain the ability to resize volumes dynamically, expand storage pools, and optimize disk management with minimal downtime.
Whether you manage VPS infrastructure, dedicated servers, or cloud environments, mastering LVM is essential for modern Linux administration.
At Advanced Hosting, our Linux VPS and dedicated server solutions provide the performance, scalability, and infrastructure flexibility required for advanced LVM storage configurations, enterprise workloads, and high-availability deployments.
Can LVM span multiple physical disks?
Yes. One of the biggest advantages of LVM is the ability to aggregate multiple disks into a single volume group. This allows administrators to combine storage devices of different sizes into one logical storage pool, simplifying capacity management and future expansion.
Does LVM affect system performance?
In most production environments, LVM introduces minimal overhead. Modern Linux kernels optimize LVM efficiently, and performance differences are usually negligible for typical workloads. However, heavily layered configurations involving snapshots, encryption, RAID, and thin provisioning may increase I/O latency.
Is LVM supported in virtualized environments?
Absolutely. LVM is widely used in:
- KVM
- VMware
- Hyper-V
- Proxmox
- OpenStack
- Cloud VPS platforms
It is especially valuable in virtualized infrastructure because it allows flexible storage resizing and simplified migration workflows.
Can I use LVM with SSDs and NVMe drives?
Yes. LVM works extremely well with SSD and NVMe storage devices. In enterprise environments, NVMe-backed LVM configurations are commonly used for:
- Databases
- Container platforms
- Virtual machines
- High-performance caching layers
For SSD optimization, ensure TRIM support is enabled:
sudo fstrim -av
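Most distributions also ship a periodic trim timer that can be enabled instead of running fstrim by hand:
# Enable weekly scheduled TRIM (provided by util-linux on most distributions)
sudo systemctl enable --now fstrim.timer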
What is LVM thin provisioning?
Thin provisioning allows logical volumes to consume physical space only when data is actually written. This enables overprovisioning and more efficient storage allocation.
Thin pools are particularly useful for:
- Virtual machine hosting
- Container environments
- Snapshot-heavy workloads
- Large-scale cloud infrastructure
Example thin pool creation:
sudo lvcreate --type thin-pool -L 100G -n thinpool vg_data
Can encrypted storage be combined with LVM?
Yes. LVM is commonly integrated with LUKS encryption for secure storage deployments.
Typical enterprise architecture:
Disk → LUKS Encryption → LVM → Filesystem
This approach provides:
- Full-disk encryption
- Flexible logical volume management
- Secure multi-volume storage layouts
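A minimal sketch of building that stack on a spare disk, assuming /dev/sdd and a mapping name of cryptdata (this destroys any existing data on the device):
# Encrypt the raw device, then open it as a mapped device
sudo cryptsetup luksFormat /dev/sdd
sudo cryptsetup open /dev/sdd cryptdata
# Layer LVM on top of the decrypted mapping
sudo pvcreate /dev/mapper/cryptdata
sudo vgcreate vg_secure /dev/mapper/cryptdata
sudo lvcreate -L 20G -n lv_secret vg_secure
sudo mkfs.ext4 /dev/vg_secure/lv_secret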
Are LVM snapshots suitable for backups?
LVM snapshots are useful for short-term recovery points and consistent backups, especially for databases and active filesystems. However, they should not replace full backup strategies.
Long-lived snapshots may negatively impact performance because changed blocks must be tracked continuously.
Can I migrate an existing standard partition into LVM?
Yes, but the process is more advanced and typically involves:
- Booting into rescue mode
- Shrinking existing partitions
- Creating new physical volumes
- Copying data
- Reconfiguring bootloaders
In production systems, administrators usually perform migrations during scheduled maintenance windows.
Is LVM required for Linux servers?
No, but it is strongly recommended for most server environments because it provides scalability and operational flexibility. Traditional partitions may still be acceptable for:
- Minimal embedded systems
- Simple appliances
- Immutable container hosts
For VPS, dedicated servers, and enterprise Linux deployments, LVM is generally considered best practice.