From: rocker on
On Dec 16, 2:19 pm, cindy <cindy.swearin...(a)sun.com> wrote:
> On Dec 16, 11:03 am, rocker <rocke.robert...(a)pwgsc.gc.ca> wrote:
>
>
>
>
>
> > On Dec 16, 10:59 am, cindy <cindy.swearin...(a)sun.com> wrote:
>
> > > On Dec 16, 7:48 am, rocker <rocke.robert...(a)pwgsc.gc.ca> wrote:
>
> > > > Hi
>
> > > > I have mirrored the two internal disks on a V490 for my rpool. I want
> > > > to take a SAN LUN which is the same size as the internal disks, add it
> > > > to the mirror, remove the two internal disks from the mirror
> > > > effectively breaking the mirror, simply making the SAN LUN as my
> > > > rpool.
>
> > > > Will the below process work:
>
> > > > Again rpool is an existing mirror using 2 internal disks
>
> > > > add SAN LUN to rpool
>
> > > > # zpool attach rpool c4t60060E800547670000004767000003D8d0
>
> > > > 3 way mirror created... so now wait for it to resilver
>
> > > > remove internal disks
>
> > > > # zpool detach rpool c1t1d0s0
> > > > # zpool detach rpool c2t1d0s0
>
> > > > Setup OBP boot-device boot string. Will rpool be bootable if I do
> > > > this?
>
> > > > Thank you
>
> > > Hi Rocker,
>
> > > A few things to think about/to do:
>
> > > 1. I'm not so good with SAN-level stuff, but I thought not all SAN
> > > disks are bootable
> > > so be sure that they are.
>
> > > 2. The SAN disks will need an SMI label and a slice 0. When you attach
> > > them, you'll
> > > need to specify the slice identifier. For example, on my 3510 ARRAY,
> > > and I don't think
> > > these disks are bootable, the syntax would look like this:
>
> > > # zpool attach rpool c0t0d0s0 c1t226000C0FFA001ABd7s0
>
> > > Your syntax above is missing the s0.
>
> > > 3. Apply boot blocks to the SAN disks after they are attached and have
> > > resilvered
> > > successfully.
>
> > > 4. Confirm that the SAN disks boot successfully before you de-attach
> > > your internal disks.
>
> > > See this section for more examples:http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Gu...
>
> > > Thanks,
>
> > > Cindy
>
> > Hi Cindy
>
> > We've booted off SAN for a while and it works great. HDS & EMC are
> > fine; IBM pretty much doesn't work with the Suns or HPs. Lovely ;D
>
> > Re. slice 0, I thought it was assumed, but I will add that to my
> > process.
>
> > Re. SAN, it's funny when I try and make an alternate BE for DR
> > purposes. I change the disk label to SMI, create the rpool2, and
> > lucreate. Lucreate fails with the usual:
>
> > # lucreate -n new-zfsBE-1 -p rpool2
> > Analyzing system configuration.
> > ERROR: ZFS pool <rpool1> does not support boot environments
>
> > I go back and check the SAN disk, and the label is set back to EFI? If I
> > create a BE on a local disk, it works fine once I set the label to
> > SMI. Why or how on earth does the label change on the HDS SAN LUN when
> > I do a lucreate? FC drivers?
>
> > Sorry, this one is off topic but I had to ask / rant.
>
> > Cheers
>
> The above lucreate ERROR is pointing to rpool1, not rpool2. Weird.
>
> My SAN disk experience is nil but I would like to see the syntax that
> you used to create rpool2 and rpool1. I'm assuming you have root pools
> on two different disks. Even if the disk label is SMI, you still need
> a slice from which to boot.
>
> Cindy

Sorry, I've been using rpool1 and rpool2 interchangeably for a while
for my testing. They map to either my local disk or my HDS SAN LUN. I
think I see what I was doing wrong. The old slice issue. I forgot to
give it a slice.
So here I create a new rpool3, using the SAN device.


# zpool create rpool3 c1t50060E8005476724d1s6
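
(For completeness, this is roughly how I sanity-check the LUN first --
the device name is just my test LUN, so adjust to taste:

# format -e c1t50060E8005476724d1              <- relabel to SMI from the label menu
# prtvtoc /dev/rdsk/c1t50060E8005476724d1s6    <- confirm the slice layout
)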

Create the BE

# lucreate -n new-zfsBE3 -p rpool3
Analyzing system configuration.
Comparing source boot environment <s10s_u7wos_08> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t50060E8005476724d1s6> is not a root device for
any boot environment; cannot get BE ID.
Creating configuration for boot environment <new-zfsBE3>.
Source boot environment is <s10s_u7wos_08>.
Creating boot environment <new-zfsBE3>.
Creating file systems on boot environment <new-zfsBE3>.
Creating <zfs> file system for </> in zone <global> on <rpool3/ROOT/new-zfsBE3>.
Creating <zfs> file system for </var> in zone <global> on <rpool3/ROOT/new-zfsBE3/var>.
Populating file systems on boot environment <new-zfsBE3>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </var>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <new-zfsBE3>.
Creating compare database for file system </var>.
Creating compare database for file system </rpool3/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <new-zfsBE3>.
Making boot environment <new-zfsBE3> bootable.
Creating boot_archive for /.alt.tmp.b-O7c.mnt
updating /.alt.tmp.b-O7c.mnt/platform/sun4u/boot_archive
15+0 records in
15+0 records out
Population of boot environment <new-zfsBE3> successful.
Creation of boot environment <new-zfsBE3> successful.

Note, I've used slice 6 because when I use an SMI label, that's the
slice that covers the whole disk, not slice 0 or 2 (?). So it works this
way. I can then activate the BE and move on with life.
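
(Activating it afterwards is just the usual LU dance as far as I can
tell -- nothing SAN-specific about it:

# luactivate new-zfsBE3
# init 6          <- LU wants init/shutdown here, not reboot
)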

So I might have solved this label issue. Odd about the EFI support, or
lack of it. Why can't LU support EFI?

Tah tah.



From: cindy on
On Dec 16, 1:05 pm, rocker <rocke.robert...(a)pwgsc.gc.ca> wrote:
> Sorry, I've been using rpool1 and rpool2 interchangeably for a while
> for my testing. They map to either my local disk or my HDS SAN LUN. I
> think I see what I was doing wrong. The old slice issue. I forgot to
> give it a slice.
> So here I create a new rpool3, using the SAN device.
>
> # zpool create rpool3 c1t50060E8005476724d1s6
>
> Note, I've used slice 6 because when I use an SMI label, that's the
> slice which is the whole disk, not slice 0 or 2.? So it works this
> way. I can then activate the BE and move on with life.
>
> So I might have solved this label issue. Odd about the EFI support or
> lack of. Why can't LU support EFI?

I'm not sure what the misunderstanding was but maybe it is that
even EFI-labeled disks have slices (?)

EFI labels are supported when pools are created with whole disks.
The problem is that Solaris systems cannot boot from EFI-labeled
disks, so root pool disks must have SMI labels and a slice allocated.
It doesn't matter what slice.

This is a long-standing limitation.
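
(Just as a rough sketch, reusing the device names from the first post
rather than anything I've tested here: the attach names the slice
explicitly, and the boot blocks go on after the resilver:

# zpool attach rpool c1t1d0s0 c4t60060E800547670000004767000003D8d0s0
# zpool status rpool       <- check until the resilver completes
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c4t60060E800547670000004767000003D8d0s0

Then point the OBP boot-device at the new LUN and test-boot it before
detaching the internal disks.)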

Cindy
From: rocker on
On Dec 16, 3:31 pm, cindy <cindy.swearin...(a)sun.com> wrote:
> I'm not sure what the misunderstanding was but maybe it is that
> even EFI-labeled disks have slices (?)
>
> EFI labels are supported when pools are created with whole disks.
> The problem is that Solaris systems cannot boot from EFI-labeled
> disks so root pool disks must have SMI labels and a slice allocated.
> It doesn't matter what slice.
>
> This is a long-standing limitation.
>
> Cindy

Makes sense.

Merci, madame!


From: solx on
On 16/12/2009 20:31, cindy wrote:
> I'm not sure what the misunderstanding was but maybe it is that
> even EFI-labeled disks have slices (?)
>
> EFI labels are supported when pools are created with whole disks.
> The problem is that Solaris systems cannot boot from EFI-labeled
> disks so root pool disks must have SMI labels and a slice allocated.
> It doesn't matter what slice.
>
> This is a long-standing limitation.
>
> Cindy

Hi Cindy,

Is there an ETA on being able to boot from EFI labelled disks so that
the disk labelling method is irrelevant?


From: cindy on
On Dec 18, 2:06 am, solx <nos...(a)example.net> wrote:
> Hi Cindy,
>
> Is there an ETA on being able to boot from EFI labelled disks so that
> the disk labelling method is irrelevant?

No ETA because it is a difficult compatibility problem.

I keep pushing for it because I know how much easier a sysadmin's life
would be without having to futz with format and slices, repartitioning,
and relabeling. With all EFI-labeled disks, we could say goodbye to
slices forever.
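
(In day-to-day terms, it is the difference between

# zpool create rpool c0t0d0s0     <- root pool today: SMI label, explicit slice
# zpool create tank c0t0d0        <- data pool: whole disk, EFI label, no slices

with the device names purely for illustration.)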

My life trying to explain all that cruft would be much easier too. :-)

Cindy