From: David Combs on
In article <4ae1ac77$0$83234$e4fe514c@news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik@Sun.COM> wrote:
>chuckers <chuckersjp@gmail.com> writes:
>
>>So that goes back to an earlier question. Does it make sense to
>>remove SDS from the disks and repartition
>>c0t1d0s0 so that it is the whole disk and use that? That would make
>>the root pool use the entire 132GB.
>>Is that a good idea? How do I get my data off of c0t0d0s5 and
>>c0t0d0s6 onto the root pool file system?
>
>Possible, but perhaps you want the data in a different pool.
>
>I'm still using actual swap partitions/slices and I'm not using ZFS
>for swap.
>
>>Is it better to have a separate root pool and data pool? Or is
>>tossing everything onto the root pool sufficient?
>>Is that dependent on whether I have only 2 disks or lots and lots of
>>disks?
>
>I would use a rpool and different pools for data because an rpool can only
>be a simple device or simple mirror.
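(A sketch of what that restriction means in practice; the third
device c0t2d0s0 and the exact error wording are illustrative, not
from this thread:

    zpool create rpool c0t1d0s0             # simple device: fine
    zpool attach rpool c0t1d0s0 c0t0d0s0    # make it a mirror: fine
    zpool add rpool c0t2d0s0                # second top-level vdev:
    # cannot add to 'rpool': root pool can not have multiple vdevs

Anything striped, concatenated or raidz is refused for a root pool.)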

Casper, a question: What's the REASON for that rule, that restriction,
for root-pools only?

Thanks!

David

From: David Combs on
In article <4ae1801f$0$83232$e4fe514c@news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik@Sun.COM> wrote:
>chuckers <chuckersjp@gmail.com> writes:
>
>>What has me confused is the fact that the root pool for ZFS has to be
>>done on a slice rather than the entire
>>disk (as stated on p. 127 of the manual above.)
>
>The reason is that it's not possible to boot from EFI labeled disk;
>you need to use a standard SMI labeled disk.
>
>>The manual recommends using the entire disk for ZFS pools but root
>>file system pools have to be based on
>>slices (very confusing). When converting to a ZFS root file system,
>>does it format the disk such that the
>>slice becomes the entire disk? Or do I have to do that myself?
>
>If you give a whole disk to ZFS, it will "label" the disk with a EFI
>label (wrong in your case).
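(To get back from an EFI label to a bootable SMI label you relabel
the disk by hand; roughly, with the device from this thread:

    format -e c0t1d0        # -e (expert mode) offers the label-type choice
    # label -> "0. SMI label"; then partition -> modify to lay out s0
    prtvtoc /dev/rdsk/c0t1d0s2   # verify: prtvtoc reads SMI labels

The exact menu steps vary a little between releases.)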


QUESTION: Why is that wrong in his case? THANKS!


>
>If you use a "slice", ZFS will not change the size of the slice nor
>will it change the rest of the disk.
>
>>Does the remaining 128 GB go to waste? Can I take the remaining 128 GB
>>and make it into a separate
>>ZFS disk pool by changing the remaining disk space into one
>>partition? Should I do that?
>
>I think 8GB is on the "light" side; specifically, if you want to upgrade
>using live upgrade, then you want to have sufficient space to
>clone and upgrade. You can use compression (only using the default
>algorithm)
>
>>c0t1d0s0 doesn't start at cylinder 1. Swap does. If I use s0, and
>>ZFS doesn't expand the slice for me,
>>I have non-contiguous cylinders for the remaining 128 GB. If I want
>>to make a disk pool out of that space,
>>can I do it based on two non-contiguous slices? Is that a wise thing
>>to do?
>
>Not possible (well, it is possible but not for a root pool).
>
>A root pool needs to be:
>
> - in a slice, not a full disk

QUESTION: WHY?

> - a "simple" vdev or a mirror (two or more vdevs); not a
>   concatenation
> - compression is allowed, but only using the default algorithm
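On the compression point, "on" selects the default algorithm (lzjb),
which as I understand it is the only one the boot code can read:

    zfs set compression=on rpool    # default algorithm: allowed on rpool
    # gzip and friends would not be bootable, hence "default only"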
>
>>If ZFS expands the slice to the entire disk, what happens to my
>>mirror halves on d5 and d6?
>
>It won't do that.
>
>>Should I remove SDS first and then attempt a Live upgrade after
>>repartitioning c0t1d0s0 to be the entire disk?

1: What's "SDS"? (inane question, yes, so THANK YOU)

2: That "best practices" zfs-document, I think it said something
about deliberately NOT using the entire disk -- use a little
less.

The reason was that otherwise, to mirror it or to replace it
after a crash, you'd have to buy another disk EXACTLY THE SAME --
same manufacturer, same model, maybe even the same firmware revision.

(Not that I understand WHY the restriction, given just how
fancy, clever, ingenious, etc zfs seems to be.)

The reason for not using 100% of a disk is that the
replacement disk might have more bad blocks spared out than
the original (or than the other side of the mirror, I don't
recall which). The new disk would then look like (in fact, be)
*slightly smaller* than the other, ie NOT identical, and so
too small to attach to the mirror.

Something like that, I recall.
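If I recall the failure mode right, it looks something like this
(paraphrased, with this thread's device names):

    # replacement disk turns out a few MB smaller than the original
    zpool attach rpool c0t1d0s0 c0t0d0s0
    # cannot attach c0t0d0s0 to c0t1d0s0: device is too small

Leaving a little slack in slice 0 when you first label the disks
avoids exactly that.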

Now, if anyone can flesh that out, YOU can --
and much THANKS for that!

David



>
>I don't think you can "remove" SDS without losing the data.
>(I don't think you can downgrade SDS to a simple UFS filesystem)
>
>
>Casper
>--
>Expressed in this posting are my opinions. They are in no way related
>to opinions held by my employer, Sun Microsystems.
>Statements on Sun products included here are not gospel and may
>be fiction rather than truth.


From: David Combs on
In article <92e2b766-c3a3-41e7-ac98-f7def791b884@d23g2000vbm.googlegroups.com>,
ITguy <southallc@gmail.com> wrote:
>Here's what I'd do:
>
>1) For each SVM mirror, detach the submirror from c0t1d0
>2) Format c0t1d0 using 100% of the disk as slice 0.
>3) Create the root ZFS pool using c0t1d0s0
>4) Live upgrade to the ZFS pool, using a Zvol in the root pool for
>swap
>5) Create ZFS datasets in the root pool for /data and /data1
>6) Transfer files from /data and /data1 to the new ZFS datasets
>7) Activate the new BE and boot your new ZFS root
>8) If the new BE looks good, re-label c0t0d0 to match c0t1d0
>9) Add c0t0d0s0 as a mirror for the root pool.
>
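Spelling those steps out as commands -- a sketch only; the SVM
metadevice names (d10, d20) and the boot-environment name (zfsBE)
are made up, the rest comes from ITguy's list:

    metadetach d10 d20                     # 1) per mirror, detach c0t1d0 half
    format c0t1d0                          # 2) partition -> modify: s0 = 100%
    zpool create rpool c0t1d0s0            # 3) root pool on the slice
    lucreate -n zfsBE -p rpool             # 4) new BE; swap zvol lands in rpool
    zfs create rpool/data                  # 5) datasets for the data
    zfs create rpool/data1
    # 6) copy /data and /data1 into the new datasets (cpio, rsync, ...)
    luactivate zfsBE                       # 7) activate...
    init 6                                 #    ...and boot the new BE
    # 8) relabel c0t0d0 to match c0t1d0, then:
    zpool attach rpool c0t1d0s0 c0t0d0s0   # 9) mirror the root pool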

1: Do you show any of this in vfstab?

2: If not, then is there some standard file, in /etc/, say,
where you *document* this stuff?

3: Ditto for the pools and zfs's, and for dir-trees that also
contain other zfs's mounted here and there?

(Would such a tree consist of ONLY zfs's? Ie, would it
be best NOT to have plain files and zfs's populating the same
tree -- that is, directly owned by the same "top level" zfs,
ie pool/zfs? Or would you want each zfs to contain only
EITHER files OR zfs's?)

Like, what kinds of designs do (good) sysadmins typically
try to end up with?
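(From skimming the docs, my understanding: ZFS keeps that metadata
in the pool itself, so datasets with normal mountpoints never show
up in vfstab at all; only the swap zvol still gets an entry there.
Something like this would show it -- assuming ITguy's pool name:

    zpool status rpool                      # physical layout
    zfs list -r -o name,mountpoint rpool    # datasets + mountpoints
    grep swap /etc/vfstab                   # the swap zvol's entry

Is that the idea?)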



THANKS!

David


From: Ian Collins on
David Combs wrote:
> In article <4ae1acf9$0$83234$e4fe514c@news.xs4all.nl>,
> Casper H.S. Dik <Casper.Dik@Sun.COM> wrote:
>> chuckers <chuckersjp@gmail.com> writes:
>>
>>> So I can do this:
>>> zpool create rpool c0t1d0s0
>>> (will add c0t0d0s0 as a mirror after upgrade.)
>>> zpool create datapool c0t1d0s1
>>> (will add c0t0d0s1 as a mirror after upgrade.)
>> I have a similar setup on some of my systems. I also upgraded from
>> a UFS "/" partition; before that, I already had a ZFS "data" pool.
>>
>>
> ....
>
> Naive question:
>
> Does that mean that before you upgraded you had a UFS
> root partition, and when you finished you had a
> zfs root pool, no more ufs?
>
> What advantage is there to having both ufs and zfs on the
> same computer ("system" is the current vocab, I guess)?

Reverse the question: why would you want both on the same system?

--
Ian Collins
From: Ian Collins on
David Combs wrote:
> In article <4ae1ac77$0$83234$e4fe514c@news.xs4all.nl>,
> Casper H.S. Dik <Casper.Dik@Sun.COM> wrote:
>> chuckers <chuckersjp@gmail.com> writes:
>>
>>> So that goes back to an earlier question. Does it make sense to
>>> remove SDS from the disks and repartition
>>> c0t1d0s0 so that it is the whole disk and use that? That would make
>>> the root pool use the entire 132GB.
>>> Is that a good idea? How do I get my data off of c0t0d0s5 and
>>> c0t0d0s6 onto the root pool file system?
>> Possible, but perhaps you want the data in a different pool.
>>
>> I'm still using actual swap partitions/slices and I'm not using ZFS
>> for swap.
>>
>>> Is it better to have a separate root pool and data pool? Or is
>>> tossing everything onto the root pool sufficient?
>>> Is that dependent on whether I have only 2 disks or lots and lots of
>>> disks?
>> I would use a rpool and different pools for data because an rpool can only
>> be a simple device or simple mirror.
>
> Casper, a question: What's the REASON for that rule, that restriction,
> for root-pools only?

Booting. The boot loader has to find and read the root file system
before ZFS is fully up, and it only knows how to read a simple
device or a mirror (where either half holds a complete copy).

--
Ian Collins