\documentclass[11pt]{article}
%Gummi|065|=)
\title{\textbf{RAID on GnuLinux - Mdadm}}
\usepackage{graphicx}
\usepackage{caption}
\author{Steak Electronics}
\date{07/31/19}
\begin{document}
%\maketitle
\textbf{RAID on GnuLinux - Mdadm}
%\textbf{Todo}
\section{Overview}
There are a few options for RAID on GNU/Linux, among them Btrfs and ZFS; today, however, I will focus on the software RAID solution using mdadm. It is historically the oldest Linux software RAID, and therefore should be the best vetted, although its performance may be slightly lower than that of the two mentioned above. For simple servers, mdadm might be the most stable choice.
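Before anything else, mdadm needs to be installed. A minimal sketch, assuming a Debian-family system such as Devuan (gdisk is included because sgdisk is used below):
\begin{verbatim}
# mdadm for the arrays, gdisk for the sgdisk commands used below
apt-get install mdadm gdisk
\end{verbatim}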
\section{Details}
I've worked with this in setting up some Core 2 Duo PCs, with 2 to 4 SATA HDDs. Let's begin.
\\
\\
\subsection{Creation of RAID:}
The partitions will need to be the same if adding a replacement or new disk.
I'm going to make a boot partition of 10GB,
a swap of 2GB,
and a 50GB home / data partition.
First, let's clear the partition tables with sgdisk.\footnote{Ref: https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS}
\begin{verbatim}
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
(sgdisk requires the gdisk package)
fdisk /dev/sda
First put the 55GB Root in.
n
-return
-return
-return
+55G
Then swap
n
-return
-return
-return
+8G
t
-return
82
Then write the table and exit
w
\end{verbatim}
Type 82 is for swap.
The setup will be a root of 55G, then swap.
We will be generous with swap, even though it's probably not necessary to go
over 4GB.
Do this for all HDDs in the RAID.
EDIT: you can clone HDD partition tables; see further down this doc.
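The array itself was created during the OS install in my case (see the install doc note below), but for reference, a minimal sketch of creating a three-member RAID1 mirror from partitions like these; the device names and md number are assumptions:
\begin{verbatim}
# create a 3-member RAID1 array from the first partitions
mdadm --create /dev/md127 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# record it in mdadm.conf and rebuild the initramfs so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
\end{verbatim}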
\subsection{Details of RAID:}
\begin{verbatim}
root@advacoONE:/dev# sudo mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Fri Feb 1 01:00:25 2019
Raid Level : raid1
Array Size : 57638912 (54.97 GiB 59.02 GB)
Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Feb 1 02:40:44 2019
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : devuan:root
UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
Events : 82
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
root@advacoONE:/dev#
\end{verbatim}
So you can see, one drive was removed (it is automatically removed from the array when unplugged).
\\
\\
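If you want to pull a working member out of the array yourself (say, to replace it pre-emptively) rather than just unplugging it, a minimal sketch, assuming the member is /dev/sda1:
\begin{verbatim}
# mark the member failed, then remove it from the array
mdadm --fail /dev/md127 /dev/sda1
mdadm --remove /dev/md127 /dev/sda1
\end{verbatim}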
\subsection{Add Drive to RAID:}
\begin{verbatim}
sudo mdadm --add /dev/md127 /dev/sda1
\end{verbatim}
NOTE2: If you set up 2 HDDs in a RAID and want to add a third, just doing --add will make it show up as a spare.
If you then do mdadm --grow /dev/md127 --raid-devices=3, the third should become active sync (what we want).
Note that --grow allows parameter changes after you have already created the RAID; you can also specify
the same option, --raid-devices=3, when creating the RAID (see install doc).
\\
\\
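Put together, growing a two-disk mirror to three active members might look like this; /dev/sdd1 is a hypothetical new partition:
\begin{verbatim}
# the new member joins as a spare; the grow then makes it an active member
mdadm --add /dev/md127 /dev/sdd1
mdadm --grow /dev/md127 --raid-devices=3
\end{verbatim}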
NOTE: don't worry about running mkfs.ext4 on the RAID members; they have their own filesystem type.
\\
\\
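The filesystem goes on the md device itself instead. In this setup the installer already did that, but for reference it would be something like this (a sketch, assuming ext4):
\begin{verbatim}
# the filesystem lives on the array, not on the member partitions
mkfs.ext4 /dev/md127
\end{verbatim}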
NOTE: if you have a new drive and need to copy the partition table from an existing one:
https://unix.stackexchange.com/questions/12986/how-to-copy-the-partition-layout-of-a-whole-disk-using-standard-tools
i.e.:
\begin{verbatim}
(FOR MBR ONLY)
Save:
sfdisk -d /dev/sda > part_table
Restore:
sfdisk /dev/NEWHDD < part_table
(FOR GPT:)
# Back up both partition tables first
sgdisk --backup=/partitions-backup-$(basename $source).sgdisk $source
sgdisk --backup=/partitions-backup-$(basename $dest).sgdisk $dest
# Copy $source layout to $dest and regenerate GUIDs
sgdisk --replicate=$dest $source
sgdisk -G $dest
\end{verbatim}
Again, no mkfs.ext4 is needed on the new member partition; just add it to the array:
\begin{verbatim}
root@advacoONE:/dev# mdadm --add /dev/md127 /dev/sda1
mdadm: added /dev/sda1
root@advacoONE:/dev# sudo mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Fri Feb 1 01:00:25 2019
Raid Level : raid1
Array Size : 57638912 (54.97 GiB 59.02 GB)
Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Feb 1 02:41:43 2019
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 0% complete
Name : devuan:root
UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
Events : 92
Number Major Minor RaidDevice State
3 8 1 0 spare rebuilding /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
root@advacoONE:/dev#
\end{verbatim}
Looks good.
\begin{verbatim}
Rebuild Status : 6% complete
Name : devuan:root
UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
Events : 103
Number Major Minor RaidDevice State
3 8 1 0 spare rebuilding /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
\end{verbatim}
As it progresses, you can watch the RAID rebuilding:
\begin{verbatim}
watch -n1 cat /proc/mdstat
Every 1.0s: cat /proc/mdstat
advacoONE: Fri Feb 1 02:43:24 2019
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sda1[3] sdb1[1] sdc1[2]
57638912 blocks super 1.2 [3/2] [_UU]
[==>..................] recovery = 11.2% (6471936/57638912) finish=13.2min speed=64324K/sec
unused devices: <none>
\end{verbatim}
\textbf{WARNING:} Reinstall GRUB on the newly added drive afterwards as well, so the machine can still boot if the other members fail.
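On a BIOS/MBR system that is just the following (a sketch; /dev/sda is assumed to be the newly added disk):
\begin{verbatim}
# make the new disk bootable on its own
grub-install /dev/sda
update-grub
\end{verbatim}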
\subsection{Email Notifications on mdadm}
Test emails on mdadm: first configure email however you prefer (I currently use ssmtp; see this link: wiki.zoneminder.com/SMS\_Notifications),
then edit /etc/mdadm/mdadm.conf to have your email in MAILADDR.
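For example, the relevant line might look like this (the address is a placeholder):
\begin{verbatim}
# /etc/mdadm/mdadm.conf (excerpt)
MAILADDR admin@example.com
\end{verbatim}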
Then running
\begin{verbatim}
sudo mdadm --monitor --scan --test --oneshot
\end{verbatim}
should send an email. See
https://ubuntuforums.org/showthread.php?t=1185134
for more details on email sending.
\section{References}
\begin{verbatim}
The section about degraded disks:
https://help.ubuntu.com/lts/serverguide/advanced-installation.html.en
General partition tips:
https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS
SSMTP email setup:
wiki.zoneminder.com/SMS_Notifications
\end{verbatim}
\end{document}