From: ITguy on
Here's what I'd do (a rough command sketch follows the list):

1) For each SVM mirror, detach the submirror from c0t1d0
2) Format c0t1d0 using 100% of the disk as slice 0.
3) Create the root ZFS pool using c0t1d0s0
4) Live Upgrade to the ZFS pool, using a zvol in the root pool for swap
5) Create ZFS datasets in the root pool for /data and /data1
6) Transfer files from /data and /data1 to the new ZFS datasets
7) Activate the new BE and boot your new ZFS root
8) If the new BE looks good, re-label c0t0d0 to match c0t1d0
9) Add c0t0d0s0 as a mirror for the root pool.
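
Roughly, the commands would look something like the following. This is only a
sketch: the metadevice names (d10/d12), the BE name (zfsBE), and the assumption
that this is SPARC Solaris 10 with Live Upgrade are placeholders for your setup.

  # 1) detach and clear the submirrors on c0t1d0 (repeat for each mirror)
  metadetach d10 d12
  metaclear d12

  # 2) relabel c0t1d0 with slice 0 covering the whole disk (SMI label)
  format c0t1d0

  # 3) create the root pool on that slice
  zpool create rpool c0t1d0s0

  # 4) create a ZFS boot environment; lucreate sets up swap/dump zvols in rpool
  lucreate -n zfsBE -p rpool

  # 5) datasets for the data; set the final mountpoints after the cutover
  zfs create rpool/data
  zfs create rpool/data1

  # 6) copy /data and /data1 over (ufsdump/ufsrestore, cpio, rsync, ...)

  # 7) activate the new BE and reboot into it
  luactivate zfsBE
  init 6

  # 8/9) once the new BE checks out, copy the label and attach the mirror
  prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
  zpool attach rpool c0t1d0s0 c0t0d0s0
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0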

From: chuckers on
On Oct 23, 10:17 pm, Casper H.S. Dik <Casper....(a)Sun.COM> wrote:
> chuckers <chucker...(a)gmail.com> writes:
[edit]
> >and then if I need to add disk, I add them to the datapool rather than
> >the rpool.
> >Correct? I think my original question was whether it was possible to
> >have 2 slices on the same
> >disk with one in the root pool and one in a data pool with the ability
> >to add more disks to
> >the data pool. That won't impact on the booting from a single device
> >or a simple mirror rule, correct?
>
> Correct; the rpool is independent from other pools.
>

Thanks. I think I am getting a better handle on this as I do more
reading and ask more questions.

One other thing:

If I have one slice associated with a root pool, rpool, and one slice
associated with a data pool, tank, and the disk breaks, I will need to
do something like the following:

{initial creation}
zpool create rpool c0t1d0s0
zpool create tank c0t1d0s1

{disk breaks, physically replaced}
zpool replace rpool c0t1d0s0
zpool replace tank c0t1d0s1

Correct? In other words, I can just issue a replace command against
the whole disk because separate slices are associated with separate pools.

So, it would probably make sense to have 2 smallish disks for rpools
and a whole bunch of other available large disks for data pools, to
keep them more manageable and less interdependent, correct?
From: Casper H.S. Dik on
chuckers <chuckersjp(a)gmail.com> writes:

>One other thing:

>If I have one slice associated with a root pool, rpool, and one slice
>associated with a data pool, tank, and
>the disk breaks, I will need to do something like the following:

>{initial creation}
>zpool create rpool c0t1d0s0
>zpool create tank c0t1d0s1

And you later added a mirror?

>{disk breaks, physically replaced}
>zpool replace rpool c0t1d0s0
>zpool replace tank c0t1d0s1

Yes, but you'll need to format and label the disk first.

>Correct? In other words, I can just issue a replace command against

"you can't just issue"

>the whole disk because separate slices
>are associated with separate pools.

Format, label and two replace commands.
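
For example, something like this (using your device names, assuming the
other half of each mirror lives on c0t0d0; installboot is the SPARC
variant, and newer zpool versions may write the boot block for you):

  # copy the label from the surviving disk to the replacement
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

  # one replace per pool
  zpool replace rpool c0t1d0s0
  zpool replace tank c0t1d0s1

  # make the new rpool slice bootable (SPARC)
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0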

>So, it would probably make sense to have 2 smallish disks for rpools
>and a whole bunch of other available large disks

Except that for rpools you will need to fdisk/format/label and
create the rpool in that slice.

>for data pools to keep them more manageable and less interdependent,
>correct?

True, if you want to use at least 4 disks.
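
Something like this, say (hypothetical device names; the root pool has to
live in a slice on an SMI-labelled disk and is normally set up by the
installer or Live Upgrade, while a data pool can be given whole disks):

  zpool create rpool mirror c0t0d0s0 c0t1d0s0
  zpool create tank mirror c1t0d0 c1t1d0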

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: chuckers on
On Oct 26, 10:32 pm, Casper H.S. Dik <Casper....(a)Sun.COM> wrote:
> chuckers <chucker...(a)gmail.com> writes:
> >One other thing:
> >If I have one slice associated with a root pool, rpool, and one slice
> >associated with a data pool, tank, and
> >the disk breaks, I will need to do something like the following:
> >{initial creation}
> >zpool create rpool c0t1d0s0
> >zpool create tank c0t1d0s1
>
> And you later added a mirror?
>

Sorry, left that bit out. Yes. That was the plan.

> >{disk breaks, physically replaced}
> >zpool replace rpool c0t1d0s0
> >zpool replace tank c0t1d0s1
>
> Yes, but you'll need to format and label the disk first.
>
> >Correct? In other words, I can just issue a replace command against
>
> "you can't just issue"
>

Got to start proofing before posting.

> >the whole disk because separate slices
> >are associated with separate pools.
>
> Format, label and two replace commands.
>

Okay. So, basically, not a whole lot different from the steps you'd go
through with SVM to replace a failed disk.

Now I just have to work up the nerve and scrounge up the time to try
this.

Thanks to everyone for the help!
From: David Combs on
In article <4ae1acf9$0$83234$e4fe514c(a)news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik(a)Sun.COM> wrote:
>chuckers <chuckersjp(a)gmail.com> writes:
>
>>So I can do this:
>
>>zpool create rpool c0t1d0s0
>>(will add c0t0d0s0 as a mirror after upgrade.)
>>zpool create datapool c0t1d0s1
>>(will add c0t0d0s1 as a mirror after upgrade.)
>
>I have a similar setup on some of my systems. I also upgraded from
>a UFS "/" partition; I already had a ZFS "data" pool before that.
>
>
....

Naive question:

Does that mean that before you upgraded you had a UFS root partition,
and when you finished you had a ZFS root pool, with no more UFS?

What advantage is there to having both UFS and ZFS on the
same computer ("system" is the current vocab, I guess)?

Thanks,

David