From: chuckers on
On Oct 23, 12:28 pm, Ian Collins <ian-n...(a)hotmail.com> wrote:
> chuckers wrote:
> > On Oct 23, 11:10 am, Ian Collins <ian-n...(a)hotmail.com> wrote:
> >> webjuan wrote:
[edit]
> > So that goes back to an earlier question. Does it make sense to
> > remove SDS from the disks and repartition
> > c0t1d0s0 so that it is the whole disk and use that? That would make
> > the root pool use the entire 132GB.
>
> I assume you mean SVM? Using the whole disk would make your task a lot
> easier.
>

Sorry, yes, SVM. Old dog, new tricks. ;-)

> > Can the root pool have, say 1 slice of 66GB (half the disk) and the
> > other slice be allocated for another disk pool (say tank)?
>
> No, as I said, you can only boot from a single device or a simple mirror.
>
> > Can I then add other disks to the tank pool and still be bootable off
> > the root pool? Is that even a good idea?
>
> See above.
>
> > Is it better to have a separate root pool and data pool? Or is
> > tossing everything onto the root pool sufficient?
>
> That depends on your needs. If your data is likely to grow, splitting
> the drive into two slices (say 32GB and the rest) one for the root pool
> and one for data would be the best option. You could then attach
> another device to the second pool to expand it when required.
>

So I can do this:

zpool create rpool c0t1d0s0
(will add c0t0d0s0 as a mirror after upgrade.)
zpool create datapool c0t1d0s1
(will add c0t0d0s1 as a mirror after upgrade.)

and then if I need to add disks, I add them to the datapool rather than
to the rpool. Correct? I think my original question was whether it was
possible to have 2 slices on the same disk, with one in the root pool
and one in a data pool, and still be able to add more disks to the data
pool later. That won't run afoul of the "boot only from a single device
or a simple mirror" rule, correct?
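
I'm guessing the mirror step after the upgrade would be something like
this, with my device names (untested, so please correct me if the
syntax is off):

  zpool attach rpool c0t1d0s0 c0t0d0s0
  zpool attach datapool c0t1d0s1 c0t0d0s1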


> > Is that dependent on whether I have only 2 disks or lots and lots of
> > disks?
>
> You may only have two now, but will you add more later?


Actually, I already have more disks. I am just trying to keep the
example simple so I can understand what's
going on before screwing it up.
From: Casper H.S. Dik on
chuckers <chuckersjp(a)gmail.com> writes:

>What has me confused is the fact that the root pool for ZFS has to be
>done on a slice rather than the entire
>disk (as stated on p. 127 of the manual above.)

The reason is that it's not possible to boot from an EFI-labeled disk;
you need to use a disk with a standard SMI label.
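
If you want to check which label a disk currently has, prtvtoc shows
the VTOC of an SMI-labeled disk, e.g. (device name is just an example):

  prtvtoc /dev/rdsk/c0t1d0s2

and "format -e" (expert mode) lets you write an SMI label over an EFI
label if you ever need to convert a disk back (that destroys the
existing partitioning, of course).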

>The manual recommends using the entire disk for ZFS pools but root
>file system pools have to be based on
>slices (very confusing.) When converting to a ZFS root file system,
>does it format the disk such that the
>slice becomes the entire disk? Or do I have to do that myself?

If you give a whole disk to ZFS, it will "label" the disk with an EFI
label (wrong in your case).

If you use a "slice", ZFS will not change the size of the slice, nor
will it change the rest of the disk.
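
In other words (hypothetical device name, and the two commands are
alternatives, not a sequence):

  zpool create datapool c1t2d0     # whole disk: ZFS writes an EFI label
  zpool create datapool c1t2d0s1   # one slice: the SMI label and the
                                   # other slices are left untouched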

>Does the remaining 128 GB go to waste? Can I take the remaining 128 GB
>and make it into a separate
>ZFS disk pool by changing the remaining disk space into one
>partition? Should I do that?

I think 8GB is on the "light" side; specifically, if you want to upgrade
using Live Upgrade, you want sufficient space to clone and upgrade the
boot environment. You can use compression (but only the default
algorithm) to save some space.
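
Turning on the default (lzjb) compression for the whole pool is just,
for example:

  zfs set compression=on rpool

(gzip and the other algorithms are not supported on a root pool).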

>c0t1d0s0 doesn't start at cylinder 1. Swap does. If I use s0, and
>ZFS doesn't expand the slice for me,
>I have non-contiguous cylinders for the remaining 128 GB. If I want
>to make a disk pool out of that space,
>can I do it based on two non-contiguous slices? Is that a wise thing
>to do?

Not possible (well, it is possible but not for a root pool).

A root pool needs to be:

  - in a slice, not a full disk
  - a "simple" vdev or a mirror (two or more devices); not a
    concatenation
  - compression is allowed, but only with the default algorithm
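
So, for example, either of these would be a valid root pool (device
names are just examples):

  zpool create rpool c0t1d0s0
  zpool create rpool mirror c0t1d0s0 c0t0d0s0

but raidz or adding a second top-level vdev to rpool is not allowed.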

>If ZFS expands the slice to the entire disk, what happens to my
>mirror halves on d5 and d6?

It won't do that.

>Should I remove SDS first and then attempt a Live upgrade after
>repartitioning c0t1d0s0 to be the entire disk?

I don't think you can "remove" SDS without losing the data.
(I don't think you can downgrade SDS to a simple UFS filesystem)


Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Casper H.S. Dik on
chuckers <chuckersjp(a)gmail.com> writes:

>So that goes back to an earlier question. Does it make sense to
>remove SDS from the disks and repartition
>c0t1d0s0 so that it is the whole disk and use that? That would make
>the root pool use the entire 132GB.
>Is that a good idea? How do I get my data off of c0t0d0s5 and
>c0t0d0s6 onto the root pool file system?

Possible, but perhaps you want the data in a different pool.

I'm still using actual swap partitions/slices and I'm not using ZFS
for swap.
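
I.e. something like (sizes and device names are only examples):

  swap -a /dev/dsk/c0t1d0s3          # a plain swap slice (what I do)

instead of a zvol:

  zfs create -V 4G rpool/swap
  swap -a /dev/zvol/dsk/rpool/swap   # swap on ZFS, if you want that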

>Is it better to have a separate root pool and data pool? Or is
>tossing everything onto the root pool sufficient?
>Is that dependent on whether I have only 2 disks or lots and lots of
>disks?

I would use an rpool and different pools for data, because an rpool can
only be a simple device or a simple mirror.

The other pools can be raidz{,2,3}, they can be concatenated, and you can
use any form of compression.
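
For example (hypothetical devices):

  zpool create datapool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs set compression=gzip datapool

and later you can grow it with another top-level vdev:

  zpool add datapool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0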

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Casper H.S. Dik on
chuckers <chuckersjp(a)gmail.com> writes:

>So I can do this:

>zpool create rpool c0t1d0s0
>(will add c0t0d0s0 as a mirror after upgrade.)
>zpool create datapool c0t1d0s1
>(will add c0t0d0s1 as a mirror after upgrade.)

I have a similar setup on some of my systems. I also upgraded from
a UFS "/" partition, and I already had a ZFS "data" pool before that.
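
If you go the Live Upgrade route, the migration itself is roughly
(the BE name is just an example):

  lucreate -n zfsBE -p rpool
  luactivate zfsBE
  init 6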


>and then if I need to add disks, I add them to the datapool rather than
>to the rpool. Correct? I think my original question was whether it was
>possible to have 2 slices on the same disk, with one in the root pool
>and one in a data pool, and still be able to add more disks to the data
>pool later. That won't run afoul of the "boot only from a single device
>or a simple mirror" rule, correct?

Correct; the rpool is independent from other pools.

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Thomas Maier-Komor on
chuckers wrote:
> [...]
> So I can do this:
>
> zpool create rpool c0t1d0s0
> (will add c0t0d0s0 as a mirror after upgrade.)
> zpool create datapool c0t1d0s1
> (will add c0t0d0s1 as a mirror after upgrade.)
> [...]

Keep in mind that if you add a mirror to a ZFS root pool, you'll have to
do an installboot on the new device; otherwise you won't be able to boot
from the mirror when the original device dies...
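
On SPARC that would be something like (using the slice from your example):

  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
      /dev/rdsk/c0t0d0s0

and on x86:

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0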

- Thomas