\documentclass[11pt]{article}
\title{\textbf{RAID on GnuLinux - Mdadm}}

\usepackage{graphicx}
\usepackage{caption}
\author{Steak Electronics}
\date{07/31/19}

\begin{document}

%\maketitle

\textbf{RAID on GnuLinux - Mdadm}

%\textbf{Todo}

\section{Overview}
There are a few options for RAID on GNU/Linux, among them BtrFS and ZFS; today, however, I will focus on the software RAID solution mdadm. It is historically the oldest software RAID and therefore should be the best vetted, although its performance may be slightly lower than that of the two mentioned above. For simple servers, mdadm might be the most stable choice.

\section{Details}

I've worked with this in setting up some Core 2 Duo PCs with 2 to 4 SATA HDDs. Let's begin.

\subsection{Creation of RAID:}

I'll need the partitions to be the same when adding a replacement or a new disk.
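
A quick sanity check before touching anything is to compare the disks side by side. A minimal sketch, assuming the members are sda through sdc:

\begin{verbatim}
lsblk -o NAME,SIZE,TYPE /dev/sda /dev/sdb /dev/sdc
\end{verbatim}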

I'm going to make a boot partition of 10GB, a swap of 2GB, and a 50GB home/data partition.

First, let's clear the partition tables with sgdisk.\footnote{Ref: https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS}

\begin{verbatim}
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc

(sgdisk needs the gdisk package)

fdisk /dev/sda

First put the 55GB root in:
n
<return>
<return>
<return>
+55G

Then swap:
n
<return>
<return>
<return>
+8G

t
<return>
82
\end{verbatim}

Type 82 is the code for swap.
The setup will be a root of 55G, then swap. We will be generous with swap, even though it's probably not necessary to go over 4GB.

Do this for all HDDs in the RAID.

EDIT: you can clone HDD partition tables instead of repeating this by hand. See further down this doc.

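With the partitions in place, the array itself is created with mdadm --create. This write-up defers the exact command to the install doc, so the following is only a hedged sketch; the device names, partition numbers, and member count are assumptions for illustration.

\begin{verbatim}
# sketch only -- adjust member partitions and counts to your setup
sudo mdadm --create /dev/md127 --level=1 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# the filesystem goes on the md device, not on the members
sudo mkfs.ext4 /dev/md127

# swap partitions are made per disk
sudo mkswap /dev/sda2

# record the array so it assembles at boot
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf
sudo update-initramfs -u
\end{verbatim}
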
\subsection{Details of RAID:}

\begin{verbatim}
root@advacoONE:/dev# sudo mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Feb 1 01:00:25 2019
     Raid Level : raid1
     Array Size : 57638912 (54.97 GiB 59.02 GB)
  Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Feb 1 02:40:44 2019
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : devuan:root
           UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
         Events : 82

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
root@advacoONE:/dev#
\end{verbatim}

So you can see, one drive was removed (a member is removed automatically when it is unplugged).

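For reference, a member can also be dropped by hand rather than by unplugging it. A minimal sketch, with the device name as an assumption:

\begin{verbatim}
# mark a member failed, then remove it from the array
sudo mdadm --manage /dev/md127 --fail /dev/sdc1
sudo mdadm --manage /dev/md127 --remove /dev/sdc1
\end{verbatim}
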
\subsection{Add Drive to RAID:}

\begin{verbatim}
sudo mdadm --add /dev/md127 /dev/sda1
\end{verbatim}

NOTE2: If you set up 2 HDDs in a RAID and then want to add a third, a plain --add will bring it in as a spare.
If you then do mdadm --grow /dev/md127 --raid-devices=3, the third should become active sync (which is what we want); see the sketch below.
Note that --grow seems to allow parameter changes after you have already created the RAID; you can also specify the exact same thing, --raid-devices=3, when first setting up the RAID (see install doc).

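A hedged sketch of that grow-after-add sequence, assuming the new third member is /dev/sdc1:

\begin{verbatim}
# add the new member; it will show up as a spare at first
sudo mdadm --add /dev/md127 /dev/sdc1

# widen the array so the spare becomes an active member
sudo mdadm --grow /dev/md127 --raid-devices=3

# watch it resync
cat /proc/mdstat
\end{verbatim}
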
NOTE: if you have a new drive and need to copy the HDD partition tables, see:
https://unix.stackexchange.com/questions/12986/how-to-copy-the-partition-layout-of-a-whole-disk-using-standard-tools
or, in short:

\begin{verbatim}
(FOR MBR ONLY)
Save:
sfdisk -d /dev/sda > part_table

Restore:
sfdisk /dev/NEWHDD < part_table


(FOR GPT:)
# Back up the partition tables
sgdisk --backup=/partitions-backup-$(basename $source).sgdisk $source
sgdisk --backup=/partitions-backup-$(basename $dest).sgdisk $dest

# Copy $source layout to $dest and regenerate GUIDs
sgdisk --replicate=$dest $source
sgdisk -G $dest
\end{verbatim}
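
A hedged usage example of the GPT variant with concrete devices; the assignments are assumptions, so double-check which disk is the healthy member and which is the new one before replicating:

\begin{verbatim}
source=/dev/sdb   # healthy existing member
dest=/dev/sda     # new blank drive

sgdisk --backup=/partitions-backup-$(basename $source).sgdisk $source
sgdisk --replicate=$dest $source
sgdisk -G $dest
\end{verbatim}
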
NOTE: don't worry about running mkfs.ext4 on the RAID members; they have their own filesystem type as members of the array. No need for ext4 here.

\begin{verbatim}
root@advacoONE:/dev# mdadm --add /dev/md127 /dev/sda1
mdadm: added /dev/sda1
root@advacoONE:/dev# sudo mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Feb 1 01:00:25 2019
     Raid Level : raid1
     Array Size : 57638912 (54.97 GiB 59.02 GB)
  Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Feb 1 02:41:43 2019
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           Name : devuan:root
           UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
         Events : 92

    Number   Major   Minor   RaidDevice State
       3       8        1        0      spare rebuilding   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
root@advacoONE:/dev#
\end{verbatim}

Looks good.

\begin{verbatim}
 Rebuild Status : 6% complete

           Name : devuan:root
           UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
         Events : 103

    Number   Major   Minor   RaidDevice State
       3       8        1        0      spare rebuilding   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
\end{verbatim}

As it progresses, you see the RAID rebuilding:

\begin{verbatim}
watch -n1 cat /proc/mdstat

Every 1.0s: cat /proc/mdstat
advacoONE: Fri Feb 1 02:43:24 2019

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sda1[3] sdb1[1] sdc1[2]
      57638912 blocks super 1.2 [3/2] [_UU]
      [==>..................]  recovery = 11.2% (6471936/57638912) finish=13.2min speed=64324K/sec

unused devices: <none>
\end{verbatim}

\textbf{WARNING:} Reinstall GRUB on the newly added drive afterwards as well.

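A minimal sketch of that step on a Debian/Devuan-style install, assuming the new member is /dev/sda:

\begin{verbatim}
sudo grub-install /dev/sda
sudo update-grub
\end{verbatim}
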
\subsection{Email Notifications on mdadm}

Test emails on mdadm: first configure email however you prefer (I currently use ssmtp; see this link: wiki.zoneminder.com/SMS\_Notifications).

Then edit /etc/mdadm/mdadm.conf so that MAILADDR has your email address.
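
For reference, the line looks something like this (the address is a placeholder):

\begin{verbatim}
# /etc/mdadm/mdadm.conf
MAILADDR you@example.com
\end{verbatim}
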
Then run:

\begin{verbatim}
sudo mdadm --monitor --scan --test --oneshot
\end{verbatim}

This should send an email.

See https://ubuntuforums.org/showthread.php?t=1185134 for more details on email sending.

\section{References}

\begin{verbatim}
The section about degraded disks:
https://help.ubuntu.com/lts/serverguide/advanced-installation.html.en

General partition tips:
https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS

SSMTP email setup:
wiki.zoneminder.com/SMS_Notifications
\end{verbatim}

\end{document}