\documentclass[11pt]{article}
%Gummi|065|=)
\usepackage{xcolor}
\usepackage[vcentering,dvips]{geometry}
\geometry{papersize={6in,9in},total={4.5in,6.8in}}
\title{\textbf{RAID on GnuLinux - Mdadm Reference}}
\usepackage{graphicx}
\usepackage{caption}
\author{Steak Electronics}
\date{07/31/19}
\begin{document}
%\maketitle
\textcolor{green!60!blue!70}{
\textbf{RAID on GnuLinux - Mdadm Reference}}
%\textbf{Todo}
%\tableofcontents
\textcolor{green!60!blue!70}{
\section{Overview}}
There are a few options for software RAID on GNU/Linux. Among them are Btrfs and ZFS; however, today I will focus on mdadm. It is historically the oldest software RAID, and therefore should be the best vetted, although its performance may trail that of the other two. For simple servers, mdadm might be the most stable choice.
\textcolor{green!60!blue!70}{
\section{Details}}
I've used this while setting up some Core 2 Duo PCs with 2 to 4 SATA HDDs. This will serve as a reference. Let's begin.
\\
\\
\textcolor{green!60!blue!70}{
\subsection{Creation of RAID:}}
Creation will not be covered in depth here (yet). In brief: create the partition tables, create the RAID with mdadm, run mkfs.ext4 on the RAID device, add mdadm to the grub config, and reinstall grub. Details may be provided later.
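In the meantime, here is a minimal sketch of those steps for a two-disk RAID1 on a Debian-family system. The device names /dev/sdb1, /dev/sdc1, and /dev/md0 are examples, not taken from the setup below; adapt them to your machine.
\begin{verbatim}
# As root. Create a two-member RAID1 array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sdb1 /dev/sdc1
# Make the filesystem on the array device, not the members:
mkfs.ext4 /dev/md0
# Record the array so the initramfs can assemble it at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# Reinstall grub on each member disk:
grub-install /dev/sdb
grub-install /dev/sdc
\end{verbatim}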
\textcolor{green!60!blue!70}{
\subsection{Details of RAID:}}
\begin{verbatim}
root@advacoONE:/dev# sudo mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Feb 1 01:00:25 2019
     Raid Level : raid1
     Array Size : 57638912 (54.97 GiB 59.02 GB)
  Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Feb 1 02:40:44 2019
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : devuan:root
           UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
         Events : 82

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
root@advacoONE:/dev#
\end{verbatim}
As you can see, one member was removed (mdadm marks a drive as removed automatically when it is unplugged).
\\
\\
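If you ever need to pull a working member out by hand instead of unplugging it, a minimal sketch (the member name /dev/sdX1 is a placeholder):
\begin{verbatim}
# Mark the member faulty, then remove it from the array:
sudo mdadm --fail /dev/md127 /dev/sdX1
sudo mdadm --remove /dev/md127 /dev/sdX1
\end{verbatim}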
\textcolor{green!60!blue!70}{
\subsection{Add Drive to RAID:}}
\begin{verbatim}
sudo mdadm --add /dev/md127 /dev/sda1
\end{verbatim}
NOTE2: If you set up two HDDs in a RAID and want to add a third, a plain --add will bring it in as a spare. If you then run mdadm --grow /dev/md127 --raid-devices=3, the third becomes active sync (what we want). --grow allows parameter changes after the RAID has already been created; you can also pass the same --raid-devices=3 when first creating the RAID (see the install doc). Note that if you lose a drive, you can simply add a replacement.
\\
\\
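Putting NOTE2 together, a minimal sketch of the add-then-grow sequence, assuming the array and partition names from this machine (/dev/md127 and /dev/sda1):
\begin{verbatim}
# The new member arrives as a spare:
sudo mdadm --add /dev/md127 /dev/sda1
# Raise the device count so the spare becomes an active mirror:
sudo mdadm --grow /dev/md127 --raid-devices=3
\end{verbatim}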
NOTE: after the initial setup, don't worry about running mkfs.ext4 on the RAID members. The RAID will manage that.
\\
\\
NOTE: if you have a new drive and need to copy the partition table from an existing disk, see
https://unix.stackexchange.com/questions/12986/how-to-copy-the-partition-layout-of-a-whole-disk-using-standard-tools
or, in short:
\begin{verbatim}
(FOR MBR ONLY)
Save:
sfdisk -d /dev/sda > part_table
Restore:
sfdisk /dev/NEWHDD < part_table

(FOR GPT:)
# Back up the GPT disks
sgdisk --backup=/partitions-backup-$(basename $source).sgdisk $source
sgdisk --backup=/partitions-backup-$(basename $dest).sgdisk $dest
# Copy $source layout to $dest and regenerate GUIDs
sgdisk --replicate=$dest $source
sgdisk -G $dest
\end{verbatim}
\begin{verbatim}
root@advacoONE:/dev# mdadm --add /dev/md127 /dev/sda1
mdadm: added /dev/sda1
root@advacoONE:/dev# sudo mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Feb 1 01:00:25 2019
     Raid Level : raid1
     Array Size : 57638912 (54.97 GiB 59.02 GB)
  Used Dev Size : 57638912 (54.97 GiB 59.02 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Feb 1 02:41:43 2019
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           Name : devuan:root
           UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
         Events : 92

    Number   Major   Minor   RaidDevice State
       3       8        1        0      spare rebuilding   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
root@advacoONE:/dev#
\end{verbatim}
Looks good.
\begin{verbatim}
 Rebuild Status : 6% complete

           Name : devuan:root
           UUID : 83a8dc03:802a4129:26322116:c2cfe1d4
         Events : 103

    Number   Major   Minor   RaidDevice State
       3       8        1        0      spare rebuilding   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
\end{verbatim}
As it progresses, you can watch the RAID rebuilding:
\begin{verbatim}
watch -n1 cat /proc/mdstat

Every 1.0s: cat /proc/mdstat      advacoONE: Fri Feb 1 02:43:24 2019

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sda1[3] sdb1[1] sdc1[2]
      57638912 blocks super 1.2 [3/2] [_UU]
      [==>..................]  recovery = 11.2% (6471936/57638912)
      finish=13.2min speed=64324K/sec

unused devices: <none>
\end{verbatim}
\textbf{WARNING:} Afterwards, reinstall grub on the new drive as well.
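A minimal sketch, assuming a BIOS/MBR machine and that the new member disk is /dev/sda (adjust to your actual device):
\begin{verbatim}
# Put the bootloader on the new member disk too:
sudo grub-install /dev/sda
sudo update-grub
\end{verbatim}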
\textcolor{green!60!blue!70}{
\subsection{Email Notifications on mdadm}}
To test email from mdadm, first configure outgoing email however you prefer (I currently use ssmtp; see
https://wiki.zoneminder.com/How\_to\_get\_ssmtp\_working\_with\_Zoneminder).
Then edit /etc/mdadm/mdadm.conf and put your email address in MAILADDR.
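The relevant line looks something like this (the address is a placeholder):
\begin{verbatim}
# /etc/mdadm/mdadm.conf
MAILADDR you@example.com
\end{verbatim}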
Then run:
\begin{verbatim}
sudo mdadm --monitor --scan --test --oneshot
\end{verbatim}
This should send a test email. See
https://ubuntuforums.org/showthread.php?t=1185134
for more details on email sending.
\textcolor{green!60!blue!70}{
\section{References}}
\begin{verbatim}
The section about degraded disks:
https://help.ubuntu.com/lts/serverguide/advanced-installation.html.en
General partition tips:
https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS
SSMTP email setup:
https://wiki.zoneminder.com/How_to_get_ssmtp_working_with_Zoneminder
https://wiki.zoneminder.com/SMS_Notifications
\end{verbatim}
\end{document}