My software RAID experience in Linux

Computer builds, hardware and software discussion or troubleshooting, including peripherals. Essentially a general place to talk about desktop computers.
Locked
Red Squirrel
Posts: 29209
Joined: Wed Dec 18, 2002 12:14 am
Location: Northern Ontario

My software RAID experience in Linux

Post by Red Squirrel »

It's always been said that software RAID should not be touched with a 10 foot pole and that hardware is better, but I wanted RAID without spending a lot of money on a good RAID card (you get what you pay for; the cheap ones, and the ones built into motherboards, are not true hardware RAID). So I decided to look at Linux software RAID with mdadm, since I want a redundant data store for my VM server and it's not really feasible to back up entire VMs. That, and this server won't always be turned on, so it would miss a lot of backup jobs.

My current setup is a mid-range server (actually low end by today's standards: 32-bit CPU, not dual core) with 4 IDE drives, each on its own channel.

One IDE drive is the OS drive, and I did not RAID it as that would probably cause complications, especially in the event of a failure. So it's just separate, and I'll just image it as a backup for the OS setup. Then there are three 500GB drives which are in software RAID 5.
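
For the OS drive I'll probably just dd the whole thing to an image file on another machine. Something like this should do it (the device name and destination path are just examples; check which disk is actually the OS drive with fdisk -l first, and ideally do it booted from a live CD so the drive isn't in use):

Code:

# /dev/hde is a placeholder for the OS drive -- verify with fdisk -l
# the destination is wherever you have room for a big image file
dd if=/dev/hde of=/mnt/backup/os-drive.img bs=1M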

In Linux, IDE drives show up as /dev/hd*. Each individual physical drive still shows up, since this is not hardware RAID, so they're still treated independently. But Linux also creates a RAID device which shows up as /dev/md*, in my case /dev/md0.
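
For reference, building the array and putting a file system on it pretty much boils down to this (I'm going from memory, so double-check the man page; the mount point is just how I happened to name mine):

Code:

# create a 3-disk RAID 5 out of the three 500GB partitions
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hdc1 /dev/hda1 /dev/hdg1

# put ext3 on the new md device and mount it
mkfs.ext3 /dev/md0
mkdir /vmraid1
mount /dev/md0 /vmraid1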

I won't go through all the commands since I'm still learning myself, but let's just say it was fairly easy to figure out. Everything is done with the mdadm command, and these are some of the outputs you can get, which are quite informative:

Code:

[root@alderaan vmraid1]#
[root@alderaan vmraid1]# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Mar 23 19:42:07 2008
     Raid Level : raid5
     Array Size : 976767872 (931.52 GiB 1000.21 GB)
    Device Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Mar 24 21:17:33 2008
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 4% complete

           UUID : b1271f7e:10899be9:8487ddbb:7b3c3fed
         Events : 0.64

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/hdc1
       3       3        1        1      spare rebuilding   /dev/hda1
       2      34        1        2      active sync   /dev/hdg1
[root@alderaan vmraid1]#
[root@alderaan vmraid1]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 hdg1[2] hdc1[0] hda1[3]
      976767872 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [=>...................]  recovery =  5.0% (24702588/488383936) finish=305.4min speed=25300K/sec

unused devices: <none>
[root@alderaan vmraid1]#


Now for the fun part.  To test how well this RAID 5 works and how redundant it is, I decided to power off the server and unplug a drive.  To simulate a real-life situation I should really do it while it's live, but these are IDE drives and at the hardware level they're really not meant to be hot plugged, so I decided not to, just in case (though it would probably do no harm).
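
Apparently you can also fake a failure purely in software without touching the hardware. I didn't go that route this time, but from the man page it should look something like this:

Code:

# mark one member as failed, then pull it out of the array
mdadm --manage /dev/md0 --fail /dev/hda1
mdadm --manage /dev/md0 --remove /dev/hda1

# the array should now show up as degraded
cat /proc/mdstat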

So I rebooted.  At that point the RAID 5 still mounted OK, but it was missing one drive, which was indicated, and the array showed up as degraded.  Now what would be nice is some kind of daemon that sends off an email when it detects such a state.  Nothing stops me from scripting something that watches the output, or maybe something like that already exists and I'm just not aware of it.
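
Turns out mdadm does have a monitor mode that is supposed to be able to email you when an array degrades. I haven't set it up yet, so treat this as a rough sketch:

Code:

# in /etc/mdadm.conf -- where the alerts get sent (address is just an example)
MAILADDR root@localhost

# run the monitor as a background daemon, checking every 5 minutes
mdadm --monitor --scan --daemonise --delay=300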

So I made some changes to the file system and added a new file with some data in it.

Again, since IDE drives are not hot swappable, I powered off to plug the third drive back in, then powered on.

The server came back up, no hiccups.  Now, I would have liked it to auto-rebuild, but I guess it's good that it doesn't, since I don't want it grabbing just any random drive it finds; that could be dangerous.  So I re-added the new drive to the array.  It's literally a one-command thing:

mdadm --manage /dev/md0 --add /dev/hda1

Then it says it's added, and at that point it starts rebuilding the array.  This is a rather long process, but it runs in the background and you can still use the array in the meantime.
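
An easy way to keep an eye on the rebuild is to just keep re-reading /proc/mdstat, and I believe you can raise the minimum rebuild speed if you want it done sooner at the expense of normal I/O:

Code:

# refresh the rebuild progress every 5 seconds
watch -n 5 cat /proc/mdstat

# optionally raise the minimum rebuild speed (KB/sec per device)
echo 25000 > /proc/sys/dev/raid/speed_limit_min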

Now I wanted to go a step further.  That server won't be on a UPS, so what if the power went out?  I flicked off the power switch on the back to simulate a power outage, waited for the drives to spin down (this server also sounds like an airplane when it powers down, so it must be really fast!), then turned the server back on.  It booted right into the OS, and my array still exists and continues to rebuild!  So at no point did I get stuck at some kind of boot-up error where I had to physically be at the server.  I'm actually quite impressed so far.

It's kind of hard to rate the speed at this point, but it seems OK; not super fast, but not really slow either.  It took about 10 minutes to create a 10GB VM.  And RAID 5 is probably not the best RAID type to test this with; RAID 0 would be interesting to try.  My mobo supports SATA, so maybe in the future I'll add a couple of drives to make a RAID 0 array.
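
If I get around to benchmarking it properly, a rough sequential test is easy enough to do (the paths here are just examples):

Code:

# crude sequential write test onto the array's file system
dd if=/dev/zero of=/vmraid1/speedtest.bin bs=1M count=1024

# buffered read timing straight off the md device
hdparm -t /dev/md0

rm /vmraid1/speedtest.bin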

There's also stuff like chunk sizes to look at.  In my case I should probably set my chunk size much higher, since I'm dealing with larger files.
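
From what I can tell the chunk size has to be picked when the array is created (the --chunk option, in KB), so changing it would mean rebuilding the array from scratch. Something like this for bigger chunks:

Code:

# same 3-disk RAID 5, but with 256K chunks instead of the default 64K
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 /dev/hdc1 /dev/hda1 /dev/hdg1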


So if anyone is on a budget but wants RAID, I'd definitely take a look at the mdadm command in Linux.  It's a pretty nice alternative to expensive RAID controllers.  Also, you can pretty much mix and match any kind of drive/type.  I have not tried it, but you could probably even RAID weird things like USB sticks, external HDDs, or maybe even flash cards hooked up to a USB flash reader.  The possibilities are endless. :P    From the looks of it, it's also possible to grow arrays, so I could probably add 2 more drives to my RAID 5 to make a 2TB array (RAID 5 gives you the capacity of all the drives minus one).  I'm not sure how that works at the partition level though; I'd guess I'd have to use an ext3 resizing tool that runs within the OS.  I'm also not sure the array will be accessible if you just boot from a live CD, though if you reinstall the OS you should be able to reassemble it.
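
For what it's worth, the man page suggests growing goes something like this (I haven't tried any of it, and the new drive names are made up; SATA drives would show up as sd* devices). And any live CD or fresh install that includes mdadm should be able to find and reassemble the array:

Code:

# add two new drives to the array (partition names are just placeholders)
mdadm --manage /dev/md0 --add /dev/sda1 /dev/sdb1

# reshape the RAID 5 from 3 devices to 5
mdadm --grow /dev/md0 --raid-devices=5

# then grow the ext3 file system to fill the bigger array
resize2fs /dev/md0

# from a live CD or a fresh install, this should find and reassemble it
mdadm --assemble --scan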

Archived topic from Iceteks, old topic ID:5020, old post ID:38726
Honk if you love Jesus, text if you want to meet Him!
rovingcowboy
Posts: 1504
Joined: Wed Dec 18, 2002 10:14 pm

My software RAID experience in Linux

Post by rovingcowboy »

All that, when a 30 dollar or less card will work better?
I say less because I paid 5 USD for the last RAID card I bought.
It was the same company as the other one I have; I just can't remember the name offhand right now. It's the major one everybody gets, and the rest all try to make theirs as good. Just go to tigerdirect dot com or another online store and search for RAID cards; they're still being made and are fairly cheap.


Archived topic from Iceteks, old topic ID:5020, old post ID:38737
roving cowboy/ keith
Red Squirrel
Posts: 29209
Joined: Wed Dec 18, 2002 12:14 am
Location: Northern Ontario

My software RAID experience in Linux

Post by Red Squirrel »

Actually, the 30 dollar ones probably won't work better; they're like the ones built into the motherboard, they still depend on software. It's the $1,000 ones that will work better. And even then, if the card itself goes, you're royally screwed. :P

Archived topic from Iceteks, old topic ID:5020, old post ID:38738
Honk if you love Jesus, text if you want to meet Him!