From: David Bolt on
On Wednesday 20 Jan 2010 14:37, while playing with a tin of spray paint,
houghi painted this mural:

> David Bolt wrote:

>> Interesting. I'm going to have to look this up and find out how I
>> managed to miss hearing about it.
>
> There wasn't any communication about this. It was announced at the
> convention in Nuremberg.

That explains how I didn't manage to hear about it.

>> From my point of view, there's a heck of a difference. Apart from
>> anything else, I'm still not convinced about the long term reliability
>> of memory based devices. Then again, I've not used any memory devices
>> heavily for long enough to make any of them fail. I tend to use a
>> memory card or two for a few years, then upgrade them to cards of two
>> to four times the size. As for USB keys, the only reason I have any
>> other than the first 512MB one I bought is because someone else got me
>> them as gifts. I rarely use them and, if I didn't have them, wouldn't
>> really miss them.
>
> I only use them as temporary storage device. Most of the time those are
> SD cards.

That's about the only use I'd have for them, and only for a system that
isn't net-connected. If it is, I have a web server with a password
protected directory to dump stuff into. And if the stuff is bigger than
500MB, or is needed within an hour or two, well, a stack of rewritable
DVDs or a 120GB USB drive and a delivery job can handle that.

>> I usually use the second command I gave to split the received archive
>> into chunks. Doing it that way means not having to split the file up
>> before sending it, nor having to split the file up after it's been
>> received. There's no noticeable slowdown by using a pipe and, with the
>> systems being on the same LAN, there's no need for passing them through
>> an ssh tunnel.
>
> I do not tar or gzip my backups. As it is done by a cronjob, I don't
> even mind any delay there is with the ssh overhead.
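
For reference, a pipeline of that sort might look something like this
(host name, port, and chunk size are illustrative; only the split/rejoin
step is demonstrated on a local file):

```shell
# Sender side (sketch, not run here): stream a tar archive over the LAN
#   tar cf - /data | nc fileserver 9000
# Receiver side (sketch): split the incoming stream into 1GB chunks
#   nc -l -p 9000 | split -b 1G - backup.tar.part-
# Local demonstration that split chunks rejoin byte-for-byte:
dd if=/dev/zero of=/tmp/demo.bin bs=1024 count=10 2>/dev/null
split -b 4096 /tmp/demo.bin /tmp/demo.part-     # 10KB -> 3 chunks
cat /tmp/demo.part-* > /tmp/demo.joined         # the shell glob sorts the parts
cmp /tmp/demo.bin /tmp/demo.joined && echo "chunks rejoin cleanly"
```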

>>> Yep and 2TB is some sort of limit.
>>
>> I think I recall a thread on the opensuse mailing list about creating
>> 2TB+ file systems. I'll have to have a look through my archives to see
>> if I can find it, and find out what were the end results of the
>> discussion.
>
> There are ways around it, but none of them really easy. From what I
> recall when we looked into it, what you do is make several partitions
> and then use LVM to join them.

Sounds like a bit of an annoyance.

> However that meant (I think) that you
> first would need a working OS, then do the installation without
> formatting the large partition.

Maybe. I suppose it could be done when installing the OS, just as you
would with any ordinary partitioning arrangement.
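
For anyone wanting to try it, the partitions-joined-with-LVM workaround
houghi describes would look roughly like this (device names are
examples, and the commands need root on real disks, so treat this as a
sketch only):

```shell
# Turn each sub-2TB partition into an LVM physical volume
pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
# Pool them into one volume group...
vgcreate bigvg /dev/sda1 /dev/sdb1 /dev/sdc1
# ...and carve a single logical volume spanning all of it
lvcreate -l 100%FREE -n bigvol bigvg
mkfs.ext3 /dev/bigvg/bigvol
```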

>>> There are ways around it, but when
>>> you have a blank system it is not as easy as it looks.
>>
>> One more reason for having a separate non-RAID drive.
>
> Not really. You can have the OS on / on a partition of say 10GB on the
> RAID. We could have used the 1.5GB swap for this. Install the OS there,
> do the installation and then reformat the 1.5 as swap.

Sounds like a lot of fiddling.

> I think it is strange that there still is a 2TB limit when 2TB disks
> are available already.

It's possible that this 2TB limit is a legacy issue that no-one has
gotten around to fixing. With drives already at this limit, I don't
think it'll be long before it's sorted out properly. I just hope they
don't introduce a 2PB or 2EB issue when fixing it.

> What if 3TB disks come out? You would not be able to have that in one
> partition even if you wanted to, and for companies 2TB is not that
> exceptional anymore.

I don't think it'll be that long before 2TB is fairly common for home
users.

>>> Few things:
>>> 1) Why the separate IDE drive
>>
>> I'm not sure how to get grub to boot a RAID array if the device it thinks
>> is the boot device is broken or missing. The separate IDE device is
>> there to provide insurance if I can't figure it out.
>
> We have had no issues there.

Still, I've had no experience of it at all and so I'm going to keep it
as a sort of comfort blanket. At some point, I'll have to put it aside,
but it'll be handy to have at least for a short time.

>> Another is that I'd
>> rather not be tied to a particular hardware solution. If I go with a
>> software RAID, there's no RAID controller to break, and I can move the
>> drives onto another motherboard and it should still just work. Well,
>> hopefully it will.
>
> Sure, RAID controllers do break, but so does anything else, and a RAID
> controller breaking will happen much less often than an HD breaking.

Quite possibly, but it's still one less issue to have to deal with.

>> The other reason for not going for hardware RAID is there isn't really
>> a need to do so. It's not going to be used in a performance critical
>> capacity. All it'll be used for is a central file server for both my
>> and my wife's machines. As long as it's capable of chucking out data at
>> around 10MB/s, it'll be fine.
>
> I tried soft raid once and had it fail extremely hard. So hard that I
> lost a lot of data.

I still intend to keep backups of the more important stuff, so any
failure shouldn't cause more than a minimal data-loss. Still, it would
be annoying to have any data-loss.

>> I had thought about that, but I've still got to figure out how to get
>> it to boot properly if the device that grub thinks is the boot device
>> fails. I may end up having to put /boot onto that single IDE device to
>> work around it. Or maybe not. I haven't looked too hard for how to deal
>> with that situation yet, although I think I can see a way to do it by
>> modifying /boot/grub/menu.lst so it points to a working drive. Of
>> course, it would rely on the system not going down between me finding
>> out the drive has died, and making the required changes.
>
> You can put it on e.g. sda or on /.
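
As a possible alternative to hand-editing menu.lst after a failure, GRUB
Legacy (the grub shipped with openSUSE 11.x) has a fallback directive
that tries a second entry automatically when the first fails to boot.
Something like this, with device names and kernel paths purely as
examples:

```
# /boot/grub/menu.lst -- illustrative only; device names and kernel
# paths will differ on a real install
default 0
fallback 1              # if entry 0 fails to boot, try entry 1

title openSUSE (first RAID member)
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/md0
    initrd /boot/initrd

title openSUSE (second RAID member)
    root (hd1,0)
    kernel /boot/vmlinuz root=/dev/md0
    initrd /boot/initrd
```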


>> However, having never handled anything like this before, it's
>> completely new territory and I think it'll be both a fun and useful
>> learning experience. I'm actually looking forward to deliberately
>> breaking the arrays so I can try fixing them without data loss.
>
> That is what I thought, till I lost all my data. :-(

That's one of the reasons for me practising on a virtual machine now,
and then doing more practising with the real hardware, all before
actually bringing it on-line.

>>> 2) As you start with 3 1TB drives, you will stay below the 2TB limit. If
>>> you add another one, you will have problems
>>
>> Maybe. It will be interesting to see if the YaST partitioner can create
>> 2TB+ partitions. If it can, well that will save me from figuring out
>> how to do the partitioning. That is both good and bad, in that I'll end
>> up not having to learn how to do it.
>
> I think it is a limitation on other things, not so much the OS itself.
> Would be interesting to hear if it works.

I'll let you know, as and when I get it all up and running.

>>> and 1.5TB or 2TB HDs are not
>>> that expensive anymore.
>>
>> No, they are coming down in price. Don't know about reliability though.
>> I'm still a little concerned about that, especially with the issue of
>> backing up.
>
> There is a new way of looking at backup if you have RAID.

Well, the issue would be how to back up several TB of data, and where
to store it. My present backup solution, which isn't a particularly
good one, is to store several copies of important material on more
than one machine, and to have a further backup on DVD for the stuff
I can't duplicate/recreate.

>> Sounds like they wanted RHEL but didn't want to pay for it. Who's the
>> unlucky person that ends up providing support when things go wrong?
>
> Well, that would be a cow orker and myself. They are both webservers and
> only the needed stuff is on it. So basically just LAMP and mc. It will be
> fun to repair it when it breaks down. :-D

That almost looks like you're looking forward to it breaking.


Regards,
David Bolt

--
Team Acorn: www.distributed.net OGR-NG @ ~100Mnodes RC5-72 @ ~1Mkeys/s
openSUSE 11.0 32b | | | openSUSE 11.3M0 32b
openSUSE 11.0 64b | openSUSE 11.1 64b | openSUSE 11.2 64b |
TOS 4.02 | openSUSE 11.1 PPC | RISC OS 4.02 | RISC OS 3.11
From: J. van der Waa on
houghi wrote:
> J. van der Waa wrote:
>> Before you start upgrading your old laptop consider the following:
>> I had a very nice Compaq Armada 6500 laptop running 10.3 fine, but
>> upgrading it to a newer version of Open Suse, it turned out that the
>> latest developments were a little bit too much for the poor thing
>> (especially KDE 4).
>
> Run XFCE or Windowmaker or IceWM instead. Especially the last one is
> very light.
To be honest... I don't want to search for solutions when they don't
come "out of the box" (in this case the openSUSE DVD). I have spent too
many hours getting things working, and I currently just don't have the
time to dig into other solutions. I still have my Toshiba Tablet with
more memory, and that one is working fine (next to my desktop and
office Thinkpad).
>
>> (sorry, but it's all in Dutch).
>
> What? Are there people who can't read Dutch? I was able to do that
> since I was 6 or 7 or so, so it can't be all that hard.
I have been wondering about this too; sometimes I just don't get it...
>
> houghi
From: David Bolt on
On Thursday 21 Jan 2010 12:41, while playing with a tin of spray paint,
houghi painted this mural:

> David Bolt wrote:

>> Maybe. I suppose it could be done when installing the OS, just as you
>> would with any ordinary partitioning arrangement.
>
> It wasn't possible with either CentOS or Fedora.

The most recent versions? If so, I'm looking forward even more to
getting started on this project just to find out if openSUSE is ahead
of the pack.

> Didn't try openSUSE as
> I did not have it with me. I somehow doubt it will be possible in one
> easy step. As I do not have the HD capacity at home, I can't check it
> nor could I do any bugreporting.

This system has close to 2TB in it, but it's split between two 500GB
drives and a single 1TB drive, so I can't test it quite yet. Hopefully, in
a week, or so, I should be able to get started. Just a few other
commitments to get out of the way first.

>>> Not really. You can have the OS on / on a partition of say 10GB on the
>>> RAID. We could have used the 1.5GB swap for this. Install the OS there,
>>> do the installation and then reformat the 1.5 as swap.
>>
>> Sounds like a lot of fiddling.
>
> That is why we didn't bother. :-D

Not surprised.

>> It's possible that this 2TB limit is one that is a legacy issue that
>> no-one is gotten around to fixing. With drives already at this limit, I
>> don't think it'll be long before it's sorted out properly. I just hope
>> they don't introduce a 2PB or 2EB issue when fixing it.
>
> I think it is strange that it is not already solved, as 2TB partitions
> should not be that uncommon in server environments.

Not working in that environment, I have no idea what sort of
arrangements they may have. A quick (ha!) test using VirtualBox, 2TB
expandable discs and software RAID shows that openSUSE can create a 4TB
RAID 5 array. Takes ages for everything to get in sync, and it'll be
interesting to see how big the resulting virtual drives end up when it
finishes.
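
For the record, creating an array like that is the standard mdadm sort
of thing; a setup along these lines, with device names as examples only
(and root required):

```shell
# Create a 3-member RAID5 array from three (virtual) 2TB disks;
# device names are examples and will differ
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1
# The long initial sync can be watched here
cat /proc/mdstat
```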

Unfortunately, creating a 4TB ext4 partition at the same time as
building the recovery records is going to take a while. A guesstimate
by mdadm is that it will take around 6 months to finish. That's
(hopefully!) way off and, once the ext4 file system is written, that
estimate should drop considerably.

As for what it would do with a hardware RAID situation and a single 4TB
device, I don't have the hardware available to test it out and so don't
know.

>> Well, the issue would be how to back up several TB of data, and where
>> to store it. My present backup solution, which isn't a particularly
>> good one, is to store several copies of important material on more
>> than one machine, and to have a further backup on DVD for the stuff
>> I can't duplicate/recreate.
>
> The how is easy. storeBackup can handle it and works very fast (after
> the first backup). The where is another question. As it is RAID I would
> just place it on the RAID and only extremely important information on a
> remote server (in case the house burns down).

That's a part of the reason for having backups on DVD. I can drop off
the latest ones at a friend's, and that provides some security. The only
issue I'd have is if some form of disaster managed to cover an area big
enough to get us both before a restoration could occur.

>> That almost looks like you're looking forward to it breaking.
>
> That is not the case (in case my boss is reading this as well. :-D )

Okay, that's convinced me you're dreading it ever happening :-)


Regards,
David Bolt

From: David Bolt on
On Thursday 21 Jan 2010 16:10, while playing with a tin of spray paint,
J. van der Waa painted this mural:

> houghi wrote:

>> Run XFCE or Windowmaker or IceWM instead. Especially the last one is
>> very light.
> To be honest.... I don't want to search for solutions when they don't
> come "out of the box" (in this case the openSUSE DVD).

They are all available on the openSUSE DVD, with XFCE even having its
own pattern.


Regards,
David Bolt

From: Eef Hartman on
houghi <houghi(a)houghi.org.invalid> wrote:
> David Bolt wrote:
> I think it is strange that it is not already solved, as 2TB partitions
> should not be that uncommon in server environments.

ext3 has a 2 TB (with 1 KB blocksize) or 4 TB to 8 TB (with the default
4 KB blocksize) limit, which cannot be fixed without breaking
compatibility. It has been fixed in ext4.
Servers mostly use JFS or XFS for those large partitions.

E.g. our (new) home dir server has 8 x 1 TB disks in a RAID5
configuration, running CentOS 5. The largest RAID partition is some 3.7 TB:
/dev/sda1 688G 207M 688G 1% /export/home-vol
/dev/sda2 459G 246M 459G 1% /export/group-vol
/dev/sda3 3.7T 197M 3.7T 1% /export/bulk-vol
/dev/sda4 1.6T 197M 1.6T 1% /export/admin-vol
They are called sda? as the RAID is hardware in the blade storage
box; the PC just sees it as an (already partitioned) 7TB disk. See
this warning if you try *fdisk on it:
WARNING: The size of this disk is 7.0 TB (6998095560704 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
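
Following that warning's advice, putting a GPT label on such a disk
with parted would go something like this (device name is an example;
needs root, and relabelling destroys the existing partition table):

```shell
# Replace the DOS label with GPT, then make one partition spanning the disk
parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary 0% 100%
```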
--
*******************************************************************
** Eef Hartman, Delft University of Technology, dept. SSC/ICT **
** e-mail: E.J.M.Hartman(a)tudelft.nl - phone: +31-15-278 82525 **
*******************************************************************