From: ITguy
> >1)  For each SVM mirror, detach the submirror from c0t1d0
> >2)  Format c0t1d0 using 100% of the disk as slice 0.
> >3)  Create the root ZFS pool using c0t1d0s0
> >4)  Live upgrade to the ZFS pool, using a Zvol in the root pool for
> >swap
> >5)  Create ZFS datasets in the root pool for /data and /data1
> >6)  Transfer files from /data and /data1 to the new ZFS datasets
> >7)  Activate the new BE and boot your new ZFS root
> >8)  If the new BE looks good, re-label c0t0d0 to match c0t1d0
> >9)  Add c0t0d0s0 as a mirror for the root pool.
>
> 1: Do you show any of this in vfstab?

If you want a copy for reference, back up your vfstab before you
begin:
# cp /etc/vfstab /etc/vfstab.ufs
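
It can also be handy to keep a copy of the current SVM layout for
reference (the output file name here is just a suggestion):
# metastat -p > /var/tmp/metastat-p.before-zfs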

Live upgrade will take care of vfstab updates, except for your data
partitions. You may want to change the mount points for the ZFS data
partitions just to keep the paths the same before and after the
upgrade. For example, let's say you create "rpool" for the live
upgrade, then create rpool/data and rpool/data1 for step 5 above. For
step 6, you could remove the vfstab entries for /data and /data1,
unmount /data and /data1, and transfer the data from the metadevices
to the new ZFS datasets:
# cd /rpool/data ; ufsdump 0f - /dev/md/rdsk/dX | ufsrestore rf -
# cd /rpool/data1 ; ufsdump 0f - /dev/md/rdsk/dY | ufsrestore rf -
(where dX and dY are the metadevices that held /data and /data1)
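
Once the copy finishes, a quick sanity check on the new datasets
doesn't hurt (using the dataset names from the example above):
# zfs list -r rpool
# df -h /rpool/data /rpool/data1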

Then modify the mountpoint property of the new ZFS datasets to match
the old paths:
# zfs set mountpoint=/data rpool/data
# zfs set mountpoint=/data1 rpool/data1
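
You can confirm the new mount points took effect with something
like:
# zfs get mountpoint rpool/data rpool/data1
# df -h /data /data1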


> 2: If not, then is there some standard file, in /etc/, say,
>    where you *document* this stuff.

The "zpool status" command is as good or better than vfstab for
keeping track of allocated disks.
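
For example, once both halves of the root mirror are attached, it
will look something like this (illustrative output - your devices
and layout will differ):

# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t1d0s0  ONLINE       0     0     0
            c0t0d0s0  ONLINE       0     0     0

errors: No known data errors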


> 3: Ditto for the pools and zfs's and dir-trees also containing,
>    mounted here and there, other zfs's?

The "zfs list" command displays all of this.


>    (Would such a tree consist of ONLY zfs's, ie would it
>     be best to NOT have files and zfs's populating the same
>     tree (well, directly owned by the same "top level" zfs,
>     ie pool/zfs).  Or would you want to have a zfs contain only
>     EITHER files OR zfs's?

The "zfs list" command will show ZFS datasets. When traversing the
file system, it's transparent which directories are plain old
directories or a new ZFS dataset, much like any directory could be
used for a UFS mount point.
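
For example, if rpool/data has its mountpoint set to /data as above,
a child dataset just shows up as another directory under it
(rpool/data/projects is a hypothetical name):
# zfs create rpool/data/projects
# df -h /data/projects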


>     Like, what kinds of designs do (good) sysadmins typically
>     try to end up with?

Dunno if I meet the criteria for a "good" sysadmin...

1) Create a slice using 100% of the boot disk and use it for the
root pool.
1a) Mirror the root pool if another disk is available.
1b) When sharing the root pool with data file systems, set a
reasonable reservation for the root ZFS file system (see the
example after this list).

2) If you have more than 2 disks, mirror your root pool and create a
second pool for data.
2a) For data pools, do not slice the disks first - add entire
disks to the pool - cXtXdX (see the example after this list).
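
A couple of examples for 1b and 2a (the pool name, devices, BE name,
and reservation size are just placeholders):
# zfs set reservation=10G rpool/ROOT/zfsBE
# zpool create datapool mirror c1t0d0 c1t1d0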

Keep the number of pools to a minimum. If you create multiple data
pools, you'll reduce the number of disks available to each pool,
limiting the total theoretical throughput. Just put all the disks in
the pool and let ZFS manage the I/O - that's what it is designed for.