From: chuckers on
We have a number of machines that are using SDS to mirror the disks
but are considering migrating them to ZFS root file systems.  I am in
a bit of a quandary about how to do this.

I have read over the ZFS Admin manual at http://dlc.sun.com/pdf/819-5461/819-5461.pdf
which says I will need to do Live Upgrade to convert to a ZFS root
file system, which doesn't seem like too big of a deal.

What has me confused is the fact that the root pool for ZFS has to be
done on a slice rather than the entire
disk (as stated on p. 127 of the manual above.)

Our current layout is like this:

[root(a)myhost]# cat md.cf
# metadevice configuration file
# do not hand edit
d1 -m d11 d21 1
d11 1 1 c0t0d0s1
d21 1 1 c0t1d0s1
d6 -m d16 d26 1
d16 1 1 c0t0d0s6
d26 1 1 c0t1d0s6
d5 -m d15 d25 1
d15 1 1 c0t0d0s5
d25 1 1 c0t1d0s5
d0 -m d10 d20 1
d10 1 1 c0t0d0s0
d20 1 1 c0t1d0s0

[root(a)myhost] # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0         7.9G   2.8G   5.0G    36%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    60G   872K    60G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                       7.9G   2.8G   5.0G    36%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    60G     0K    60G     0%    /tmp
swap                    60G    20K    60G     1%    /var/run
/dev/md/dsk/d5          20G    22M    19G     1%    /data
/dev/md/dsk/d6          87G    64M    86G     1%    /data1

I am considering using c0t1d0s0 as the initial slice to migrate to.
c0t1d0s0 is currently only 8 GB out of a max 136 GB. Would my root
pool only ever be 8 GB?

The manual recommends using the entire disk for ZFS pools, but root
file system pools have to be based on slices (very confusing).  When
converting to a ZFS root file system, does it format the disk such
that the slice becomes the entire disk?  Or do I have to do that myself?

Does the remaining 128 GB go to waste?  Can I take the remaining 128 GB
and make it into a separate ZFS disk pool by changing the remaining
disk space into one partition?  Should I do that?

Another complication:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    2612 -  3656        8.01GB    (1045/0/0)    16787925
  1       swap    wu       1 -  2611       20.00GB    (2611/0/0)    41945715
  2     backup    wm       0 - 17844      136.70GB    (17845/0/0)  286679925
  3 unassigned    wm       0                0         (0/0/0)              0
  4 unassigned    wm       0                0         (0/0/0)              0
  5 unassigned    wm    3657 -  6267       20.00GB    (2611/0/0)    41945715
  6 unassigned    wm    6268 - 17837       88.63GB    (11570/0/0)  185872050
  7 unassigned    wm   17838 - 17844       54.91MB    (7/0/0)         112455
  8       boot    wu       0 -     0        7.84MB    (1/0/0)          16065
  9 unassigned    wm       0                0         (0/0/0)              0

c0t1d0s0 doesn't start at cylinder 1. Swap does. If I use s0, and
ZFS doesn't expand the slice for me,
I have non-contiguous cylinders for the remaining 128 GB. If I want
to make a disk pool out of that space,
can I do it based on two non-contiguous slices? Is that a wise thing
to do?

If ZFS expands the slice to the entire disk, what happens to my
mirror halves on d5 and d6?

Should I remove SDS first and then attempt a Live Upgrade after
repartitioning c0t1d0s0 to be the entire disk?
If so, how do I get my info from /data and /data1 onto the ZFS side of
things? (This example is based on
a freshly installed machine with nothing in those dirs at the moment
but we do have machines with lots of
data we can't afford to lose.)

I have only ever seen ZFS root file systems listed in the manual with
the zfs list command, which lists things like:

NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 7.26G  59.7G    98K  /rpool
rpool/ROOT            4.64G  59.7G    21K  legacy
rpool/ROOT/zfs1009BE  4.64G  59.7G  4.64G  /
rpool/dump            1.00G  59.7G  1.00G  -
rpool/export            44K  59.7G    23K  /export
rpool/export/home       21K  59.7G    21K  /export/home
rpool/swap               1G  60.7G    16K  -
rpool/zones            633M  59.7G   633M  /rpool/zones

Would it be correct to assume that commands like df -h would show up
as "normal"? (Not sure how I define
normal. The ROOT and legacy words are basically scaring me, that's
all. What does df -h output?)

I hope this is coherent. Too many questions in my head about this
trying to get out at once.

Any help is appreciated.

Thanks in advance.
From: webjuan on
On Oct 22, 9:17 pm, chuckers <chucker...(a)gmail.com> wrote:
> We have a number of machines that are using SDS to mirror the disks
> but are considering migrating them
> to ZFS root file systems.  I am in a bit of a quandary about how to do
> this.
>
> I have read over the ZFS Admin manual at http://dlc.sun.com/pdf/819-5461/819-5461.pdf
> which says I will need
> to do Live Upgrade to convert to a ZFS root file system which doesn't
> seem like too big of a deal.
>
> What has me confused is the fact that the root pool for ZFS has to be
> done on a slice rather than the entire
> disk (as stated on p. 127 of the manual above.)
>

As of this writing, in order to boot a Solaris host, the disk needs an
SMI label, i.e. slices 0 thru 7 with slice 2 covering the entire
drive.  When creating a ZFS pool, it's recommended that you use the
entire disk.  Doing so labels the drive as EFI.  You will notice that
an EFI label creates an additional slice 8 and no longer uses slice 2
for the entire disk.  By labeling the drive as EFI, manually or
otherwise, the drive is no longer bootable.

# format -e c0t0d0
....
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.
Continue? y
partition> q


> Our current layout is like this:
>
> [root(a)myhost]# cat md.cf
> # metadevice configuration file
> # do not hand edit
> d1 -m d11 d21 1
> d11 1 1 c0t0d0s1
> d21 1 1 c0t1d0s1
> d6 -m d16 d26 1
> d16 1 1 c0t0d0s6
> d26 1 1 c0t1d0s6
> d5 -m d15 d25 1
> d15 1 1 c0t0d0s5
> d25 1 1 c0t1d0s5
> d0 -m d10 d20 1
> d10 1 1 c0t0d0s0
> d20 1 1 c0t1d0s0
>
> [root(a)myhost] # df -h
> Filesystem             size   used  avail capacity  Mounted on
> /dev/md/dsk/d0         7.9G   2.8G   5.0G    36%    /
> /devices                 0K     0K     0K     0%    /devices
> ctfs                     0K     0K     0K     0%    /system/contract
> proc                     0K     0K     0K     0%    /proc
> mnttab                   0K     0K     0K     0%    /etc/mnttab
> swap                    60G   872K    60G     1%    /etc/svc/volatile
> objfs                    0K     0K     0K     0%    /system/object
> sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
> /usr/lib/libc/libc_hwcap1.so.1
>                        7.9G   2.8G   5.0G    36%    /lib/libc.so.1
> fd                       0K     0K     0K     0%    /dev/fd
> swap                    60G     0K    60G     0%    /tmp
> swap                    60G    20K    60G     1%    /var/run
> /dev/md/dsk/d5          20G    22M    19G     1%    /data
> /dev/md/dsk/d6          87G    64M    86G     1%    /data1
>
> I am considering using c0t1d0s0 as the initial slice to migrate to.
> c0t1d0s0 is currently only 8 GB out of a max 136 GB.  Would my root
> pool only ever be 8 GB?
>

You can attach other disks and/or slices to the root pool to expand
the disk space.

> The manual recommends using the entire disk for ZFS pools but root
> file system pools have to be based on
> slices (very confusing.)  

See SMI vs EFI above.

> When converting to a ZFS root file system,
> does it format the disk such that the
> slice becomes the entire disk?  Or do I have to do that myself?

You have to do that yourself, since the root pool uses an SMI label, i.e. slices.

>
> Does the remaining 128 GB go to waste?  Can I take the remaining 128 GB
> and make it into a separate
> ZFS disk pool by changing the remaining disk space into one
> partition?  Should I do that?
>

You can create another pool with the remaining disk space.  Just
create another slice and use that slice to create the pool.  However,
that's not advisable, since it makes for an administration nightmare
down the road.
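
If you do go that route the mechanics are simple enough; it's just a zpool
create on whatever slice ends up holding the leftover space (the slice and
pool names here are only examples):

# zpool create datapool c0t1d0s6
# zfs create datapool/data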

> Another complication:
>
> Part      Tag    Flag     Cylinders         Size            Blocks
>   0       root    wm    2612 -  3656        8.01GB    (1045/0/0)    16787925
>   1       swap    wu       1 -  2611       20.00GB    (2611/0/0)    41945715
>   2     backup    wm       0 - 17844      136.70GB    (17845/0/0)  286679925
>   3 unassigned    wm       0                0         (0/0/0)              0
>   4 unassigned    wm       0                0         (0/0/0)              0
>   5 unassigned    wm    3657 -  6267       20.00GB    (2611/0/0)    41945715
>   6 unassigned    wm    6268 - 17837       88.63GB    (11570/0/0)  185872050
>   7 unassigned    wm   17838 - 17844       54.91MB    (7/0/0)         112455
>   8       boot    wu       0 -     0        7.84MB    (1/0/0)          16065
>   9 unassigned    wm       0                0         (0/0/0)              0
>
> c0t1d0s0 doesn't start at cylinder 1.  Swap does.  If I use s0, and
> ZFS doesn't expand the slice for me,
> I have non-contiguous cylinders for the remaining 128 GB.  If I want
> to make a disk pool out of that space,
> can I do it based on two non-contiguous slices?  Is that a wise thing
> to do?
>
> If ZFS expands the slice to the entire disk, what happens to my
> mirror halves on d5 and d6?
>
> Should I remove SDS first and then attempt a Live Upgrade after
> repartitioning c0t1d0s0 to be the entire disk?
> If so, how do I get my info from /data and /data1 onto the ZFS side of
> things?  (This example is based on
> a freshly installed machine with nothing in those dirs at the moment
> but we do have machines with lots of
> data we can't afford to lose.)
>
> I have only ever seen ZFS root file systems listed in the manual with
> zfs list command which lists things like:
>
> NAME                   USED  AVAIL  REFER  MOUNTPOINT
> rpool                 7.26G  59.7G    98K  /rpool
> rpool/ROOT            4.64G  59.7G    21K  legacy
> rpool/ROOT/zfs1009BE  4.64G  59.7G  4.64G  /
> rpool/dump            1.00G  59.7G  1.00G  -
> rpool/export            44K  59.7G    23K  /export
> rpool/export/home       21K  59.7G    21K  /export/home
> rpool/swap               1G  60.7G    16K  -
> rpool/zones            633M  59.7G   633M  /rpool/zones
>
> Would it be correct to assume that commands like df -h would show up
> as "normal"?  (Not sure how I define
> normal.  

Yes, you can see all the pools' mounted file systems with df -h.

> The ROOT and legacy words are basically scaring me, that's
> all.  What does df -h output?)
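
df -h looks much the same as before; the ZFS file systems just show up under
their dataset names instead of /dev paths.  Made-up numbers here, loosely
based on the zfs list example above:

rpool/ROOT/zfs1009BE    64G   4.6G    60G     8%    /
rpool/export            60G    23K    60G     1%    /export
rpool/export/home       60G    21K    60G     1%    /export/home

The legacy mountpoint on rpool/ROOT only means that container dataset isn't
mounted by ZFS automatically; the boot environments underneath it are what
actually get mounted.
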
>
> I hope this is coherent.  Too many questions in my head about this
> trying to get out at once.
>

Agreed, too many questions and no easy answer for them all, but I guess
you have to start somewhere.

> Any help is appreciated.
>
> Thanks in advance.

From: Ian Collins on
webjuan wrote:
> On Oct 22, 9:17 pm, chuckers <chucker...(a)gmail.com> wrote:
>>
>> I am considering using c0t1d0s0 as the initial slice to migrate to.
>> c0t1d0s0 is currently only 8 GB out of a max 136 GB. Would my root
>> pool only ever be 8 GB?
>>
>
> You can attach other disks and or slices to the root pool to expand
> the disk space.

Almost. A root pool can only be a single device or simple mirror.

What you could do is use s0 of one drive for your new pool, migrate over
and then merge s1 into s0 on the second drive and mirror with that.
Once the mirror is resilvered, detach the original drive, merge s1 into
s0 and re-mirror.

Messy, but 8GB really is too small to support upgrades.
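
Roughly, and only as a sketch (the slices come from your layout, "zfsBE" is
just a made-up boot environment name, and you'd have to metadetach and
metaclear the SVM submirrors on c0t1d0 first so its slices are free; zpool
create may also want -f if it still sees the old metadevice):

# zpool create rpool c0t1d0s0
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6

Once you're booted into the ZFS BE and have cleared the old BE and the SVM
devices off c0t0d0, grow its s0 over the cylinders s1 was using (assuming it
is laid out like the table you posted, they're adjacent), attach it and wait
for the resilver:

# zpool attach rpool c0t1d0s0 c0t0d0s0
# zpool status rpool

Then detach the small half, grow its s0 the same way, and re-attach:

# zpool detach rpool c0t1d0s0
# zpool attach rpool c0t0d0s0 c0t1d0s0

Don't forget boot blocks on whatever you attach (installgrub on x86,
installboot on SPARC), and depending on the release you may have to set the
pool's autoexpand property, or export and import it, before the extra space
shows up.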

>> Does the remaining 128 GB go to waste?  Can I take the remaining 128 GB
>> and make it into a separate
>> ZFS disk pool by changing the remaining disk space into one
>> partition? Should I do that?
>>
>
> You can create another pool with the remaining disk space. Just
> create another slice and use that slice to create the pool. However,
> thats not advisable since it makes it for an administration nightmare
> down the road.

How so?

--
Ian Collins
From: chuckers on
On Oct 23, 11:10 am, Ian Collins <ian-n...(a)hotmail.com> wrote:
> webjuan wrote:
> > On Oct 22, 9:17 pm, chuckers <chucker...(a)gmail.com> wrote:
>
> >> I am considering using c0t1d0s0 as the initial slice to migrate to.
> >> c0t1d0s0 is currently only 8 GB out of a max 136 GB. Would my root
> >> pool only ever be 8 GB?
>
> > You can attach other disks and or slices to the root pool to expand
> > the disk space.
>
> Almost. A root pool can only be a single device or simple mirror.
>
> What you could do is use s0 of one drive for your new pool, migrate over
> and then merge s1 into s0 on the second drive and mirror with that.
> Once the mirror is resilvered, detach the original drive, merge s1 into
> s0 and re-mirror.
>
> Messy, but 8GB really is too small to support upgrades.
>

So that goes back to an earlier question. Does it make sense to
remove SDS from the disks and repartition
c0t1d0s0 so that it is the whole disk and use that? That would make
the root pool use the entire 132GB.
Is that a good idea? How do I get my data off of c0t0d0s5 and
c0t0d0s6 onto the root pool file system?

Can the root pool have, say 1 slice of 66GB (half the disk) and the
other slice be allocated for another disk pool (say tank)?
Can I then add other disks to the tank pool and still be bootable off
the root pool? Is that even a good idea?

Is it better to have a separate root pool and data pool?  Or is
tossing everything onto the root pool sufficient?
Is that dependent on whether I have only 2 disks or lots and lots of
disks?
From: Ian Collins on
chuckers wrote:
> On Oct 23, 11:10 am, Ian Collins <ian-n...(a)hotmail.com> wrote:
>> webjuan wrote:
>>> On Oct 22, 9:17 pm, chuckers <chucker...(a)gmail.com> wrote:
>>>> I am considering using c0t1d0s0 as the initial slice to migrate to.
>>>> c0t1d0s0 is currently only 8 GB out of a max 136 GB. Would my root
>>>> pool only ever be 8 GB?
>>> You can attach other disks and or slices to the root pool to expand
>>> the disk space.
>> Almost. A root pool can only be a single device or simple mirror.
>>
>> What you could do is use s0 of one drive for your new pool, migrate over
>> and then merge s1 into s0 on the second drive and mirror with that.
>> Once the mirror is resilvered, detach the original drive, merge s1 into
>> s0 and re-mirror.
>>
>> Messy, but 8GB really is too small to support upgrades.
>>
>
> So that goes back to an earlier question. Does it make sense to
> remove SDS from the disks and repartition
> c0t1d0s0 so that it is the whole disk and use that? That would make
> the root pool use the entire 132GB.

I assume you mean SVM? Using the whole disk would make your task a lot
easier.

> Is that a good idea? How do I get my data off of c0t0d0s5 and
> c0t0d0s6 onto the root pool file system?

Yes, that probably would be a better idea.

Just split the mirrors (metadetach one side) and destroy the detached
metadevice.

Once you have your new pool, just copy the data over from the (now
single sided) mirrors.
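
For one of them it would look something like this (d5/d25 and the paths come
from your md.cf; the temporary mountpoint and the cpio flags are just one way
of doing the copy, and rpool/data assumes you're putting it in the root pool
rather than a separate data pool):

# metadetach d5 d25
# metaclear d25
# zfs create -o mountpoint=/data.new rpool/data
# cd /data; find . -depth -print | cpio -pdmu /data.new

Same again for d6/d26, then when you're happy unmount the old UFS side and
move the dataset to the real mountpoint with zfs set mountpoint=/data
rpool/data.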

> Can the root pool have, say 1 slice of 66GB (half the disk) and the
> other slice be allocated for another disk pool (say tank)?

No, as I said, you can only boot from a single device or a simple mirror.

> Can I then add other disks to the tank pool and still be bootable off
> the root pool? Is that even a good idea?

See above.

> Is it better to have a separate root pool and data pool?  Or is
> tossing everything onto the root pool sufficient?

That depends on your needs.  If your data is likely to grow, splitting
the drive into two slices (say 32GB and the rest), one for the root pool
and one for data, would be the best option.  You could then attach
another device to the second pool to expand it when required.
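
(Strictly speaking, zpool attach turns an existing device into a mirror; to
grow a plain data pool's capacity you zpool add another top-level device.
The disk name below is only a placeholder:)

# zpool add datapool c1t0d0

If the data pool is itself a mirror, add a mirrored pair instead (zpool add
datapool mirror c1t0d0 c1t1d0) so the new space keeps the same redundancy.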

> Is that dependent on whether I have only 2 disks or lots and lots of
> disks?

You may only have two now, but will you add more later?

--
Ian Collins