From: Ian Collins on
David Combs wrote:
> In article <4ae1801f$0$83232$e4fe514c(a)news.xs4all.nl>,
> Casper H.S. Dik <Casper.Dik(a)Sun.COM> wrote:
>> chuckers <chuckersjp(a)gmail.com> writes:
>>
>>> What has me confused is the fact that the root pool for ZFS has to be
>>> done on a slice rather than the entire
>>> disk (as stated on p. 127 of the manual above.)
>> The reason is that it's not possible to boot from an EFI-labeled disk;
>> you need to use a standard SMI-labeled disk.
>>
>>> The manual recommends using the entire disk for ZFS pools, but root
>>> file system pools have to be based on
>>> slices (very confusing.) When converting to a ZFS root file system,
>>> does it format the disk such that the
>>> slice becomes the entire disk? Or do I have to do that myself?
>> If you give a whole disk to ZFS, it will "label" the disk with an EFI
>> label (wrong in your case).
>
>
> QUESTION: Why is that wrong in his case? THANKS!

You won't find a BIOS that can boot from an EFI labelled disk, unless
you are a Mac user.

>> A root pool needs to be:
>>
>> - in a slice, not a full disk
>
> QUESTION: WHY?
>
Booting. The boot code can only read an SMI (VTOC) labelled disk, so the
root pool has to live in a slice on such a disk.
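
For example, a minimal sketch (the device name comes from this thread;
the partitioning step is an assumption, adjust to your own layout):

   # put an SMI (VTOC) label on the disk and make slice 0 cover all of it
   format -e c0t1d0      # "label", choose SMI, then "partition" to set s0
   # create the root pool on the slice, not on the whole disk
   zpool create rpool c0t1d0s0

Giving ZFS the whole disk (c0t1d0, no slice) would get it an EFI label,
which the boot code can't use.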

>> - a "simple" vdev or a mirror (two or more vdevs); not a
>>   concatenation
>> - compression is allowed, but only using the default algorithm.
>>
>>> If ZFS expands the slice to the entire disk, what happens to my
>>> mirror halves on d5 and d6?
>> It won't do that.
>>
>>> Should I remove SDS first and then attempt a Live upgrade after
>>> repartitioning c0t1d0s0 to be the entire disk?
>
> 1: What's "SDS"? (inane question, yes, so THANK YOU)

The old acronym for SVM.
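
If you do detach the second disk from SVM first, the sequence is roughly
this (a sketch; d5 and d6 are the mirrors mentioned earlier, the
submirror names d51/d61 are made up, so check metastat for the real ones):

   metastat -p           # see the current mirror/submirror layout
   metadetach d5 d51     # detach the submirror that lives on c0t1d0
   metadetach d6 d61
   metaclear d51 d61     # remove the now-unused submirror metadevices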

> 2: That "best practices" zfs-document, I think it said something
> about deliberately NOT using the entire disk -- use a little
> less.
>
> The reason was that if you wanted it in a mirror, or if it
> crashed, you'd have to buy another EXACTLY THE SAME --
> same manufacturer, same model, even same software upgrade.

As long as the second drive is at least as big as the first, you are OK.
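
Attaching the second half of the root pool mirror looks something like
this (a sketch using the device names from this thread; the boot block
step is needed because zpool attach doesn't install one):

   zpool attach rpool c0t1d0s0 c0t0d0s0
   zpool status rpool    # wait for the resilver to complete
   # SPARC:
   installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
       /dev/rdsk/c0t0d0s0
   # x86:
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0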

> (Not that I understand WHY the restriction, given just how
> fancy, clever, ingenious, etc zfs seems to be.)

It's easier to implement that way. What happens if your drive is full
and you attempt to mirror with a smaller device? Mirroring to a smaller
device is apparently in the works.

--
Ian Collins
From: Ian Collins on
David Combs wrote:
> In article <92e2b766-c3a3-41e7-ac98-f7def791b884(a)d23g2000vbm.googlegroups.com>,
> ITguy <southallc(a)gmail.com> wrote:
>> Here's what I'd do:
>>
>> 1) For each SVM mirror, detach the submirror from c0t1d0
>> 2) Format c0t1d0 using 100% of the disk as slice 0.
>> 3) Create the root ZFS pool using c0t1d0s0
>> 4) Live upgrade to the ZFS pool, using a Zvol in the root pool for
>> swap
>> 5) Create ZFS datasets in the root pool for /data and /data1
>> 6) Transfer files from /data and /data1 to the new ZFS datasets
>> 7) Activate the new BE and boot your new ZFS root
>> 8) If the new BE looks good, re-label c0t0d0 to match c0t1d0
>> 9) Add c0t0d0s0 as a mirror for the root pool.
>>
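
(A rough command sketch of steps 4-7 above; the BE name "zfsBE" and the
dataset names are made up, while the pool name "rpool" comes from this
thread:)

   lucreate -n zfsBE -p rpool    # step 4: new boot environment in rpool
   zfs create rpool/data         # step 5
   zfs create rpool/data1
   # step 6: copy the old /data and /data1 across (cpio, rsync, ...),
   # then give the datasets their final mountpoints with "zfs set mountpoint"
   luactivate zfsBE              # step 7
   init 6                        # use init/shutdown after luactivate, not reboot
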
>
> 1: Do you show any of this in vfstab?

Any of what?

> 2: If not, then is there some standard file, in /etc/, say,
> where you *document* this stuff.

The mountpoint is an attribute of the filesystem itself, not an entry in
/etc/vfstab. All ZFS filesystem attributes are stored with the
filesystem; use "zfs get" to see them and "zfs set" to change them.
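
For example (the dataset name is hypothetical):

   zfs get mountpoint rpool/data
   zfs set mountpoint=/data rpool/data
   zfs get all rpool/data        # every property of that filesystem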

> 3: Ditto for the pools and zfs's and dir-trees also containing,
> mounted here and there, other zfs's?

You use the ZFS commands to access this stuff. "zfs list -r" shows
nested filesystem hierarchies.

> (Would such a tree consist of ONLY zfs's, ie would it
> be best to NOT files and zfs's populating the same
> tree (well, directly owned by the same "top level" zfs,
> ie pool/zfs. Or would want to have a zfs contain only
> EITHER files OR zfs's.

That doesn't parse.

Data is files, files live in directories and directories live in other
directories or filesystems. Filesystems live in other filesystems or pools.
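
For instance (made-up names):

   zfs create datapool/home          # a filesystem inside the pool
   zfs create datapool/home/david    # a filesystem inside a filesystem
   zfs list -r datapool              # shows the whole nested hierarchy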

--
Ian Collins
From: Richard B. Gilbert on
Ian Collins wrote:
> David Combs wrote:
>> In article <4ae1acf9$0$83234$e4fe514c(a)news.xs4all.nl>,
>> Casper H.S. Dik <Casper.Dik(a)Sun.COM> wrote:
>>> chuckers <chuckersjp(a)gmail.com> writes:
>>>
>>>> So I can do this:
>>>> zpool create rpool c0t1d0s0
>>>> (will add c0t0d0s0 as a mirror after upgrade.)
>>>> zpool create datapool c0t1d0s1
>>>> (will add c0t0d0s1 as a mirror after upgrade.)
>>> I have a similar setup on some of my systems. I also upgraded from
>>> a UFS "/" partition; I already had a ZFS "data" pool before that.
>>>
>>>
>> ....
>>
>> Naive question:
>>
>> Does that mean that before you upgraded you had a UFS
>> root partition, and when you finished you had a
>> zfs root pool, no more ufs?
>>
>> What advantage is there to having both UFS and ZFS on the
>> same computer ("system" is the current vocab, I guess)?
>
> Reverse the question: why would you want both on the same system?
>

Because there is no "zfsbackup"?
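
(There is no ufsdump equivalent, but the usual stand-in is a snapshot
plus zfs send/receive; a sketch with made-up names:)

   zfs snapshot datapool/home@backup
   zfs send datapool/home@backup > /backup/home.zfs
   # or stream it straight to a pool on another host:
   zfs send datapool/home@backup | ssh otherhost zfs receive tank/home-copy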
From: Richard B. Gilbert on
Ian Collins wrote:
> Data is files, files live in directories and directories live in other
> directories or filesystems. Filesystems live in other filesystems or
> pools.
>

That doesn't parse either! Directories ARE files and also live on a disk.

The "file system" tells you how all those disk blocks are organized;
which are in use, which are free, who "owns" each file, etc, etc.
From: hume.spamfilter on
Richard B. Gilbert <rgilbert88(a)comcast.net> wrote:
> That doesn't parse either! Directories ARE files and also live on a disk.

That's being ridiculously pedantic, I'd say.

--
Brandon Hume - hume -> BOFH.Ca, http://WWW.BOFH.Ca/