Chapter 5 – Hardware and Disks


  • Determining hardware
  • Testing hardware
  • The role of the kernel
  • Disk configuration on Linux
  • The filesystem hierarchy
  • Configuring a blank disk and mounting it
  • Re-configuring a disk using LVM
  • Using systemd-mount and fstab
  • Disk encryption and working with encryption at rest
  • Current filesystem formats
  • Upcoming filesystem formats

1. Determining hardware

We will use a few different methods to determine the hardware that the system runs on.

# yum install -y pciutils usbutils


# lspci
  • Shows the PCI devices in our system.
  • Each device has an ID
# lspci -n
  • Lists only the IDs of the devices on the system
# lspci -k
  • Lists the kernel driver that is handling each device


# lsusb
  • List USB devices.


# lshw
  • Outputs the hardware tree; can format it as JSON, XML, or HTML.
  • Output is very verbose and detailed.
  • Use -short for more concise output
# lshw -c network
  • Lists just the network section of the full lshw output
  • -c means select a class; specify a class name to output only that snippet of the tree.
# lshw -c network -numeric
  • Same as above, but also shows numeric IDs


The /proc filesystem:

  • Process information pseudo-filesystem
  • Found on most Linux systems
  • Allows on-the-fly changes to be made to the running kernel

Useful /proc files:

# cat /proc/cpuinfo
# cat /proc/meminfo
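These files are plain text, so they are easy to script against. As a small sketch, the two headline numbers in /proc/meminfo can be pulled out with awk:

```shell
# Extract total and currently available memory from /proc/meminfo
# (both values are reported in kB by the kernel)
awk '/^MemTotal:/     {total = $2}
     /^MemAvailable:/ {avail = $2}
     END {printf "total=%dkB available=%dkB\n", total, avail}' /proc/meminfo
```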


  • It is a pseudo-filesystem
  • It is temporary (tmpfs), held in RAM
  • After a reboot, its files are gone forever

2. Testing hardware

  • Methods for testing potentially faulty hardware, looking at SMART and disk-testing software
  • Physically troubleshooting RAM issues
  • Always make sure backups are enabled and your files are safe before testing.
# yum install -y smartmontools hdparm

Checking disk health

  • Make sure smartd is running on the system!
  • smartd is the monitoring daemon for SMART.
  • smartd attempts to enable monitoring on ATA devices when the daemon starts, and then checks those devices every 30 minutes.
  • Errors detected by the smartd daemon are logged using the SYSLOG interface.
  • These logged errors can be gathered automatically and sent to the admin periodically.
# smartctl -a /dev/sda


  • Even if disks report back fine to your SMART commands, there is still a possibility they are causing a slowdown or other issues
  • Benchmark a disk’s read speed with the hdparm tool
  • Run the test 3 times and average the results.

Test disk speeds:

# hdparm -tT /dev/sda
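To automate the three runs, a small loop plus awk will also average the buffered-read figure for you. This is a sketch: it assumes hdparm's usual "= N MB/sec" line format and needs root (and a real /dev/sda) to do anything useful:

```shell
# Run the read benchmark three times and average the buffered disk read speed.
# Errors are silenced so the pipeline exits cleanly even without the device.
for i in 1 2 3; do hdparm -tT /dev/sda 2>/dev/null || true; done |
  awk '/buffered disk reads/ {sum += $(NF-1); runs++}
       END {if (runs) printf "average: %.2f MB/sec over %d runs\n", sum/runs, runs}'
```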

Memory testing:

  • The most thorough way to test memory is to take the system offline for a few hours while you run Memtest86+
  • Programs such as memtester also exist. However, they won’t test memory that is in use, and they might end up fighting processes such as the Out Of Memory (OOM) killer.
  • You can download Memtest86+ from its website, mount it in a VM, and boot into the program (it runs independently of CentOS)
  • Any errors will show at the bottom of the screen, which tells us the memory is bad.

If you find the system won’t boot at all, remove half of the DIMM sticks and try booting. If it still won’t boot, swap them for the other half and try again. Keep halving the set that fails like this until you are down to one or two sticks; whichever sticks are present whenever the system fails to boot are the bad ones.

3. The role of the kernel

  • Watch the kernel boot and look at which modules have been loaded by the time we get to the OS

Timeline of events:

  1. The kernel extracts itself and loads, before handing over control to the init system (systemd)
  2. While the kernel loads, it detects hardware and adds the appropriate modules so that the hardware works correctly and can be managed.

We need to watch our system boot

Disable the quiet option in our boot configuration (so we can see verbose information at boot)

# sed -i 's/ quiet//g' /etc/sysconfig/grub
# grub2-mkconfig | tee /boot/grub2/grub.cfg
# reboot

We now have system information at boot:

Everything shown above is saved and can be viewed later with the dmesg command

See which modules kernel has loaded to deal with our hardware:

# lsmod
  • Some of these modules are obvious, some are not.
  • lsmod actually just prints /proc/modules in a more readable format.
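You can see this for yourself: the first three fields of /proc/modules are the module name, its size, and its reference count, which is most of what lsmod shows:

```shell
# Roughly reproduce lsmod's columns (name, size, reference count)
# straight from /proc/modules
awk '{printf "%-24s %10s %4s\n", $1, $2, $3}' /proc/modules | head
```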

See specific information about each module:

# modinfo nf_tables_set
# modinfo modulename

Remove a module (note that you cannot remove a module that is currently in use):

# modprobe -r modulename

Load new module:

  • We can dynamically load and unload modules
  • We can manually blacklist modules from loading at all

Load new module (maybe to test it)

# modprobe modulename

See our new loaded module

# lsmod | grep modulename

Blacklist a module:

# echo "blacklist modulename" | tee -a /etc/modprobe.d/blacklist.conf

Ensure new module starts with the rest of our kernel:

# echo "modulename" | tee /etc/modules-load.d/modulename.conf
# reboot
# lsmod 
  • Check if the module loaded

4. Disk configuration on Linux

  • Differences between: vda, sda, hda, nvme
  • Difference between disks, virtual disks, partitions, file systems

Listing disks with lsblk:

  • lsblk shows a tree view of the system’s block devices, their logical separations, and their mount points.
  • A block device is a layer of abstraction on top of a storage medium
# yum install lvm2 -y

In this example, we have:

  • Physical disks: sda, sdb, sdc
  • Partitions: sda1, sda2, sda3
  • Volume group: VolGroup00
  • Logical volumes on top of our singular volume group: LogVol00, LogVol01
  • Mount points: /boot, /, [SWAP]

Listing mount points with df:

# df -h
  • We see mount points /boot and /
  • Other mount points are devtmpfs and tmpfs filesystems
  • These temporary mount points are mounted on top of RAM disks.
  • Mostly, the mount points we are concerned with are the non-temporary ones, which store files permanently.
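With GNU df you can filter the temporary mounts out entirely; -x excludes a filesystem type and --output picks the columns to show:

```shell
# Show only the persistent mounts by excluding the RAM-backed filesystems
df -h -x tmpfs -x devtmpfs --output=target,fstype,size,used
```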

Listing filesystems with df:

  • Here, / and /boot are both formatted as XFS
  • CentOS and Red Hat prefer to use XFS
  • It is not uncommon to see systems using ext4, ext3, ext2, Btrfs, or ZFS.

5. Listing logical volume manager disks, volume groups, and logical volumes

  • Let’s see the layout of disks that are being handled by LVM

Physical disks

  • Let’s see which physical volumes LVM is aware of with pvs
# pvs
  • Concise, column-style summary
# pvdisplay
  • Easier to read, more verbose output
  • We can see LVM is aware of the sda2 partition that lives on top of the sda disk
  • A physical volume can be an entire device (sda) or a partition on that device (sda3)

Volume groups:

  • A volume group is what you get when one or more physical volumes are grouped together.
  • This allows for flexibility in terms of the logical volumes that live on top.
  • vgs lists all volume groups the system is aware of.
  • vgdisplay prints detailed information about the volume groups.

#LV – the number of logical volumes in this volume group

Logical volumes:

  • In the LVM stack, we also have logical volumes
  • A logical volume is the logical device that a filesystem gets applied to
# lvs
  • There are two logical volumes
  • One is under our /, the second is our swap space
# lvdisplay

Listing swap

  • Swap is really a special, slow, annoying extension of memory rather than disk space.
  • Swap is used when your system’s RAM is full; the kernel then starts paging memory out to the disk, which can be slow.
# swapon --show
  • We see that /dev/dm-1 is our swap device
  • dm-1 is a low-level (device mapper) representation of our logical volume
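swapon --show is itself built on /proc/swaps, so the total configured swap can be computed by summing its size column (a small sketch; the sizes are in kB):

```shell
# Add up the size of every active swap device or file in /proc/swaps
awk 'NR > 1 {total += $3} END {printf "total swap: %d kB\n", total}' /proc/swaps
```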

How everything works

  • Physically, we have a disk. It can be a hard disk with spinning platters, or a solid state drive (SSD), which can be SATA or NVMe with an M.2 connector. Whatever the type, the disk is used for storage.
  • To store files on a disk, the following needs to happen:
  1. The disk needs to be readable by the OS, which is handled by the kernel. If the kernel determines the disk to be an IDE drive (uncommon these days), it will show up as an hda device. If the disk is SATA or SCSI, it will show up as an sda device. A virtual disk will be listed as vda. Disk lettering is sequential (sda, sdb, sdc, sdd, sde…)
  2. After the OS recognizes that the disk exists, it checks for partitions and filesystems. Partitions are segments of a disk, and a filesystem is the recipe for how files are read from and written to the drive.
  • We talked about the lsblk command, used to query the sysfs filesystem and the udev database.
  • We talked about mount points and filesystems.
  • Mount points are the places in the Linux hierarchy to which disks are assigned.

6. The filesystem hierarchy


/bin – Contains executable programs which are needed in single-user mode, and to bring the system up or repair it.

/opt – Contains add-on third-party packages that contain static files.

/usr/games – Contains binaries for games and educational programs (optional).

/srv – Contains site-specific data served by the system.

/boot – Contains static files for the boot loader; holds the files which are needed during the boot process.

/dev – Contains special or device files, which refer to physical devices.

/etc – Contains configuration files.

/etc/opt – Contains configuration files for add-on third-party applications installed in /opt.

/etc/skel – When a new user account is created, files from this directory are copied into the user’s home directory.

/lib – Contains shared libraries that are necessary to boot the system and run commands in the root filesystem.

/lib/modules – Contains loadable kernel modules.

/lost+found – Contains items lost in the filesystem (chunks of files mangled because of a faulty disk or a system crash).

/media – Contains mount points for removable media (CD, DVD, USB).

/mnt – Mount point for temporarily mounted filesystems. In some distros, /mnt contains subdirectories intended as mount points for several temporary filesystems.

/proc – Mount point for the proc filesystem, which provides information about running processes and the kernel. Also called a pseudo-filesystem.

/root – Home directory for the root user.

/run – Contains information that describes the system since it was booted. Some programs still use /var/run to store this information.

/sbin – Like /bin, this directory contains commands needed to boot the system, but which are usually not executed by normal users.

/sys – Mount point for the sysfs filesystem, which provides information about the kernel like /proc, but better structured.

/tmp – Contains temporary files, which may be deleted with no notice.

/usr – Usually mounted from a separate partition. Holds only shareable, read-only data, so that it can be mounted by various machines running Linux.

/usr/bin – Primary directory for executable programs. Most programs executed by normal users which are not needed for booting should be stored here.

/usr/etc – Site-wide configuration files to be shared between several machines.

/usr/include – Contains include files for the C compiler.

/var – Contains files which may change in size, such as spool and log files.

7. Configuring a blank disk and mounting it

  • We will use CLI tools to partition and format one of our disks (without LVM)
  • Talk about GPT and MBR

Step 1: Create a partition

# fdisk /dev/sdb
  • Choose a disk (here /dev/sdb, the blank disk we are configuring)
  • We will be dropped into a different shell (the fdisk shell)

Create a GPT disklabel (press g)(g for GPT)

Create new partition (press n)(n for new)

Write changes to disk (press w)(w for write)

Step 2: Format a partition:

# mkfs.ext4 /dev/sdb1
  • Format partition as ext4
# mkfs -t xfs /dev/sdb2
  • Format partition as xfs
# blkid /dev/sdb1
  • Shows UUID and Type of partition
# blkid /dev/sdb2

Step 3: Copy files to another location before mounting the new partition:

  • It is good practice to copy the files from the location you are hoping to mount over, before replacing it with your new filesystem.
# mkdir /mnt/home2
  • We create a new directory inside the /mnt directory
# mount /dev/sdb1 /mnt/home2
  • Mounts the sdb1 partition at the /mnt/home2 directory
# cp -rp --preserve=all /home/* /mnt/home2/
  • Copies all files from /home/ into /mnt/home2/
  • -r recursively copies all directories inside /home/
  • --preserve=all preserves things such as SELinux contexts, ownership, and timestamps
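The same copy-and-check pattern can be rehearsed safely on throwaway directories first (the paths below are scratch directories created by mktemp, not real system paths):

```shell
# Rehearse the copy on scratch directories before touching /home
src=$(mktemp -d) && dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
cp -rp --preserve=all "$src/." "$dst/"
cat "$dst/file.txt"
```

If the attributes survive the round trip here, the real copy over /home should behave the same way.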

Step 4: Unmount the partition from the temporary /mnt/home2 and mount it at /home

# cd / 
# umount /mnt/home2/
# mount /dev/sdb1 /home

How it works:

  • We split sdb into two partitions (sdb1 and sdb2) using fdisk
  • We had to give the disk a partition table, where it can store information about the partitions we create. The classic partition table is the Master Boot Record (MBR), and the newer one is the GUID Partition Table (GPT). GPT is the better choice because it allows things such as more than four primary partitions.

View partitions on a disk:

# fdisk /dev/sdb

Command (m for help): p

  • These logical spaces can have a filesystem applied on top of them, so that when the OS tries to write files to the disk, the disk knows how to store that data.
  • Once done, the disk can be mounted anywhere in the filesystem hierarchy, replacing any path you want.
  • This works because Linux does not care how many disks are attached to your system or what type of disks they are. All it cares about are the mount points.

8. Re-configuring a disk using LVM

  • Format the second disk in our system
  • This time, we will use LVM to do it.
  • We will use various LVM tools (lvs, pvs, vgs)
  • After creating the new logical volume, we will create a filesystem on it and mount it somewhere on our system.

How to do it:

  • Some people like to first create a partition on the drive before introducing it to the LVM lifestyle
  • We are going to use fdisk

Create a partition

# printf "g\nn\n\n\n\nt\n31\nw\n" | fdisk /dev/sdc
  • The t and 31 answers set the partition type ID to 31 (Linux LVM)
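The printf is just the keystrokes you would otherwise type interactively, with each \n standing in for Enter; echoing the string makes the script self-documenting:

```shell
# The keystrokes the scripted fdisk run feeds in, one per line:
#   g      -> create a new GPT disklabel
#   n      -> new partition (the three blank lines accept the defaults)
#   t, 31  -> change the partition type to ID 31 (Linux LVM here)
#   w      -> write the changes and exit
printf 'g\nn\n\n\n\nt\n31\nw\n'
```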
# parted /dev/sdc name 1 "HelloLinux"
  • Giving our partition a name

Present the partition to LVM:

# pvcreate /dev/disk/by-partlabel/HelloLinux
# pvs 
  • To check our new partition

Create volume group:

# vgcreate VolGroup01 /dev/disk/by-partlabel/HelloLinux

Create a logical volume within this group:

# lvcreate -l 50%FREE -n Home3 VolGroup01
# lvs
  • Lists logical volumes
  • -l 50%FREE means the new volume takes 50% of the free space in VolGroup01

Creating a filesystem on the logical volume:

# mkfs.btrfs /dev/mapper/VolGroup01-Home3

Creating mount point and mounting file system:

# mkdir /mnt/home3
# mount /dev/mapper/VolGroup01-Home3 /mnt/home3

Confirm changes

# lsblk

What we did here:

  • We have our physical disk (sdc)
  • We have a partition on top of our physical disk (sdc1)
  • We have our volume group, with our physical volume inside (VolGroup01)
  • We have our logical volume, on top of our volume group (Home3)
  • We have our filesystem, on top of our logical volume, which we then mounted at /mnt/home3

We created a virtual block device, in the form of our logical volume. This logical volume will have data written to it and will apply that data to a physical volume in the volume group.
