From: Michal on
On 06/07/2010 16:16, Kent West wrote:
> I am a RAID newb.
>
> I have a brand-new Dell Precision T1500 workstation with two 700GB SATA
> drives, pre-loaded with Windows 7.
>
> Of course, Win7 has already been wiped off.
>
> My goal is to have a redundant Debian (Stable) system, such that the
> second drive is a mirror of the first drive. I would think RAID1 would
> be the route to go.
>
> However, being a RAID newbie, I'm running into all sorts of problems,
> not least of which that I simply don't understand some of the basic
> concepts.
>
>

Boasting that you "of course wiped W7 off", only to then run into
problems with RAID and come here, is amusing. Maybe you should do some
research on RAID and how to do it in Debian; there are many helpful
guides on setting up systems. However, these are only guides, and you
also need to know how to recover from disk failures, raid controller
failures and other such fun things. W7 has some very easy to use
software raid tools which would at least have given you a helping
start. I'm not saying they are the best, but they certainly are easy
to get your head around.

Check out some guides, check the archives and look up basic
terminology. My fear is that you will get a RAID1 set up for your
data, then spend countless hours trying to get your data back when
something goes wrong, because you don't know how to do it - something
that can be all too common.


From: Bob Proulx on
Kent West wrote:
> I thought there was HARDWARE RAID and SOFTWARE RAID.

There is.

> Not knowing anything about it, I would have thought that with
> HARDWARE RAID, you'd go into the BIOS of the special
> drive-controller hardware (Ctrl-I in this case, just after the

Consider this problem of using hardware raid from the motherboard. It
is typically a proprietary scheme and can only be replicated on
another similar motherboard. If while using this scheme your
motherboard dies then you may find yourself with two perfectly good
disks but no way to get the data off of them other than to find
another similar motherboard. In fact even in environments with lots
of similar hardware I have seen problems trying to move disks using
hardware raid from one system to another while preserving the raided
data.

I recommend that you use Linux kernel software raid instead. It is
quite reliable. On the negative side if your BIOS boots only the
first disk and that is the disk that dies then the machine cannot be
booted unattended after a disk failure even if the second disk has a
full good copy of everything. You would need to rotate the drives in
that case.

> that then when you go into the installation of the OS (Debian, in
> this case), the OS's partitioner would see one drive (the main one)
> and not see the other one, because the hardware RAID controller

After hardware raid is set up, the original drives will not be seen;
instead a single new raid device will be available.

> I thought that with SOFTWARE RAID, all the RAID stuff would be done in
> the OS, and there'd be no need for a hardware controller.

Yes. With Linux kernel software raid all of the raid is done in
software by the Linux kernel. This is completely separate from using
on motherboard hardware raid.

> But apparently this RAID hardware controller (Intel Rapid Storage
> Technology - Option ROM - 9.5.0.1037) is some sort of hybrid FIRMWARE
> RAID system.

I recommend that you ignore the hardware raid and instead use software
raid. The Debian installer can set up software raid for you at system
installation time. It is easy. But it is also a little confusing.

> If I set up the controller's BIOS screen to either RAID1 or RECOVERY,

To use software raid you wouldn't set up any bios raid at all. You
would leave the drives unconfigured so that each drive is available
individually.

> when I start the Debian installer and get to the partitioning scheme, I
> see a single 700GB RAID partition with a 1MB unusable partition, and
> then the /dev/sda and /dev/sdb drives.

Any partitions you are seeing here are probably left over from the
previous operating system that you had installed on the disk before.
You will probably need to erase the old partitions and start fresh.

> If I set up the controller's BIOS screen to non-RAID for the drives, and
> then use only the installer's partitioner to manually setup all my
> partitions on both drives, making them identical, and then use the RAID
> option in the partitioner to create a RAID partition for each actual

You have things in the wrong order. First erase all partitions, then
set up the partitions for the physical raid. That will create a new
logical device that is the raid of the two physical devices. Then set
up your operating system partitions *once* on the newly created raid
device.

> partition and set mount points, the installer continues, but then won't
> install grub or lilo.

I always create a separate /boot partition because I also configure
LVM, and grub doesn't know how to boot off of lvm. Therefore a
separate /boot (on a software raid /dev/md0 logical partition) enables
grub to boot. (I am talking about grub from Lenny stable and before; I
don't know if grub2 in Squeeze and later adds this capability.) I
still think having a separate /dev/md0 for /boot makes a lot of sense.

Don't forget that you should also put your swap on raid as well.
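
As a rough sketch of what that layout ends up looking like in
/etc/fstab (device names here are only examples - the installer
usually writes UUIDs, and "vg0-root" is a hypothetical LVM volume):

/dev/md0              /boot  ext3  defaults           0  2
/dev/md1              none   swap  sw                 0  0
/dev/mapper/vg0-root  /      ext3  errors=remount-ro  0  1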

> In short, I have no idea what I'm doing, or how to do it.

Take a deep breath. Sit in a calm place for a bit. Then go back to
it and start again from the beginning. Configure the BIOS to not set
up any raid at all. The drives are just two normal drives. Then use
the debian-installer to erase all previous partitions. Start with a
clean slate.

Read through one of the many guides about installing Debian that will
walk you through the installation process step by step. I always set
up lvm, so any steps I give off the top of my head would include lvm.
I think lvm is the way to go, but the extra layer may be too much for
you here. In any case, set up the physical raid device first and then
set up the system partitions afterward.
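
For what it's worth, a command-line sketch of that ordering looks
roughly like this (the installer does all of it through menus; the
device names and partition numbers are only examples):

shell> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
shell> mkfs.ext3 /dev/md0     # the filesystem goes on the md device, not on sda1/sdb1
shell> mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
shell> mkswap /dev/md1        # swap also lives on a raid device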

Bob
From: Camaleón on
On Tue, 06 Jul 2010 10:16:17 -0500, Kent West wrote:

> I am a RAID newb.

(...)

> In short, I have no idea what I'm doing, or how to do it.

First, hardware raid and software raid are two different worlds
(involving different management techniques and performance). There is
still a third raid type ("fake-raid") which I would avoid as much as
possible (it involves firmware plus a "dm" raid setup under the OS).

Second, are you sure you need a raid setup? :-)

If so, and being a newbie, I would go for a hardware raid controller
(easy to set up). If that is not possible, then I would try software
raid (linux "md"), but this requires a bit more time to get used to. I
would test first in a test environment, because while playing with
hard disks you can easily lose your data.

> Should I use RAID at all, or should I use some other cloning technique,
> perhaps dd or rsync from a periodic cron job? (I was able to
> successfully install/boot a complete Stable system with RAID off in the
> firmware BIOS, installing onto the first drive like it was the only
> drive in the system.)

Whether you go raided or not, you should use a good backup strategy :-)
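
Since you mentioned rsync from a cron job: a minimal sketch of that
kind of backup (the destination /mnt/backup is only an assumed mount
point; this is a sketch, not a full backup plan):

# /etc/cron.d/nightly-backup - copy /home to another disk at 03:00
0 3 * * *  root  rsync -a --delete /home/ /mnt/backup/home/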

> If I use RAID, do I set the Intel Rapid Storage Technology BIOS options
> to RAID1, RECOVERY, or no RAID?

I would not go that way (Intel raid tends to be "fake-raid") :-/

> How do I get around the problems I'm seeing in the Debian partitioner (no
> bootable flag, can't create multiple partitions or partitions are empty
> after install)?

If you want to use pure software raid (linux raid), there is no need to
set up anything in the BIOS. Just tell the partitioner you want to set
up a raid1 level between the 2 disks.

Debian Installation Guide will tell you about this:

6.3.2.3. Configuring Multidisk Devices (Software RAID)
http://www.debian.org/releases/stable/amd64/ch06s03.html.en#di-partition

Greetings,

--
Camaleón


From: Michal on

> So..., you're saying that if I want to learn to use RAID, I should use
> Windows?
>
> I've been doing research on RAID for the past week, and none of the
> documentation I've found addresses my issues.
>
>

No no, not at all. Have you tried running through a guide? A quick
search will give you something like this:
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
Running through guides is a good start to give you a feel of what's
going on and how to do it; from there you can explore other ideas,
ones that may be better suited to you and your setup. You can also see
how to recover. It's always best to test that you actually know how to
recover from raid failures, rather than just thinking you know, only
to find out that you were incorrect.

Finding out what hardware and software raid are only needs a Wikipedia
search, for example, and as for whether to use software or hardware
raid, have a search through the archives or the debates on the
internet. Just remember that if you use hardware raid and your
controller fails, you will need to replace it. Can you use any
controller, or does it have to be the exact same one? If it needs to
be the exact same one and it's built in to the motherboard, what do
you do? These are the questions you need answers to. Setting up
software raid in Linux is very easy and you can do it at install time.
Tools like mdadm and /proc/mdstat will help you out.
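
For example, checking the health of a running array is just (assuming
the first array is /dev/md0):

shell> cat /proc/mdstat          # shows every array and its member state
shell> mdadm --detail /dev/md0   # per-array view: state, members, spares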


From: Miles Fidelman on

>> On 06/07/2010 16:16, Kent West wrote:
>>> I am a RAID newb.
>>>
>>> My goal is to have a redundant Debian (Stable) system, such that the
>>> second drive is a mirror of the first drive. I would think RAID1 would
>>> be the route to go.
>>>
>>> However, being a RAID newbie, I'm running into all sorts of problems,
>>> not least of which that I simply don't understand some of the basic
>>> concepts.
Well, perhaps you should read up on the basic concepts. I'd start with:

http://en.wikipedia.org/wiki/RAID
http://en.wikipedia.org/wiki/RAID_1

Then, for Linux (software) RAID:
https://raid.wiki.kernel.org/index.php/Linux_Raid
or in a somewhat more readable (but older) form:
http://tldp.org/HOWTO/Software-RAID-HOWTO.html

Then, for Debian RAID: Read the sections on RAID in the Debian
installation manual.

Basic summary of steps:

1. start installer, go through initial steps (keyboard, network, etc.)

2. start up disk partitioner
-- create partitions - here's what I go with (but for servers) - do this
for each drive (personally, I find it easier to do this with fdisk)
---- part1 boot primary Linux RAID 2G
---- part2 primary Linux RAID 3G for swap
---- part3 primary Linux RAID <rest of the space> for root
-- set up RAIDs
---- /dev/md0 - 2G - ext3 (or ext4) file system, mount point is /boot
---- /dev/md1 - 3G - swap
---- /dev/md2 - <REMAINDER OF DRIVE> - ext3(4) file system, mount point /

3. rest of the installation procedure

4. after you've rebooted and are into your newly set up system:

Make sure that you have grub installed on the MBRs of both drives (do
some googling on boot + RAID1 to understand why, and read up on grub):

shell> grub-install hd0
shell> grub-install hd1

and then edit /boot/grub/menu.lst:

#add these to boot off your second drive if the first fails
default 0
fallback <n> -- n depends on where you duplicate your boot clause

# duplicate your first boot clause, but change the line "root (hd0,0)"
# to "root (hd1,0)"

## test this - reboot, TAB to get into the boot screen, select the
## fallback boot, see if it works
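
The duplicated clause would look something like this (the kernel
version is just a placeholder, and root=/dev/md2 matches the layout
above; adjust both to your system):

title   Debian GNU/Linux (fallback, boot from second drive)
root    (hd1,0)
kernel  /vmlinuz-2.6.26-2-amd64 root=/dev/md2 ro quiet
initrd  /initrd.img-2.6.26-2-amd64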

5. One serious gotcha to watch out for, down the road: If one drive
starts failing, the first symptom is often long delays in access time
(the internal drive software keeps trying to read data, and if it can,
it will eventually return). Unfortunately, Linux software RAID treats
this as perfectly ok behavior - it will keep the drive in the array.
But... your entire machine will slow to a complete crawl. Confusing as
hell, until you realize what's going on. (On some laptops, a shorted
battery causes the same symptom).

Some things to do:

- set up smart tools, keep an eye on the Raw_Read_Error_Rate value - if
it's anything other than 0, start worrying

- install something like atop (or precisely like atop) - if it shows
that one of your drives is near 100% busy, it's probably failing

- if a drive is failing, use mdadm to "fail" the drive - things will
start performing a lot better; then replace the drive (see the example
commands after this list)

- be VERY careful during recovery, it's pretty easy to destroy the good
copy of your data (I learned this the hard way, the first time I ran
into this particular failure mode)

- as someone pointed out, RAID is not a substitute for backup - the
first time I had to rebuild from a disk crash, despite RAID, I was glad
all my user data was backed up. Either set up an external drive or
subscribe to something like CrashPlan.
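
For reference, that fail-and-replace workflow looks roughly like this
(assuming the bad drive is /dev/sdb and it is a member of /dev/md0
through /dev/md2; a sketch, not a recipe to run blindly):

shell> smartctl -a /dev/sdb               # from the smartmontools package
shell> mdadm /dev/md0 --fail /dev/sdb1    # mark the member as faulty
shell> mdadm /dev/md0 --remove /dev/sdb1  # pull it out of the array
# repeat the fail/remove for /dev/md1 and /dev/md2, power off, swap the
# drive, then partition the new disk to match the old one, e.g.:
shell> sfdisk -d /dev/sda | sfdisk /dev/sdb
shell> mdadm /dev/md0 --add /dev/sdb1     # re-add; the array resyncs
shell> cat /proc/mdstat                   # watch the rebuild progress

Afterward, reinstall grub on the new drive, as in step 4 above.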

Miles Fidelman


--
In theory, there is no difference between theory and practice.
In<fnord> practice, there is. .... Yogi Berra


