From: Michael Laajanen on
Hi,

I have a problem with S10 10/09 and setting up a mirrored root device.

Initially I tried setting it up during install, but one device failed after boot.

Now I have tried adding it back, but it complains about an EFI label, even
though I have put an SMI label on both drives.
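(For reference, I put the SMI labels on via format in expert mode, roughly
like this; the menu prompts are from memory, so treat it as a sketch:

  # format -e c0t0d0
  format> label
  [0] SMI Label
  [1] EFI Label
  Specify Label type[1]: 0
  format> quit

and the same for c0t1d0.)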

What is the proper way to make c0t0d0s0 mirrored?




  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is
        missing or invalid.  Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0s0  FAULTED      0     0     0  corrupted data
            c0t1d0s0  ONLINE       0     0     0

errors: No known data errors


/michael
From: hume.spamfilter on
Michael Laajanen <michael_laajanen(a)yahoo.com> wrote:
> What is the proper way to make c0t0d0s0 mirrored?

What is the way you're trying now? You haven't shown the commands
you're using. It should be a simple "zpool replace".
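Roughly, using the device name from your status output (untested here, just
a sketch):

  # zpool replace rpool c0t0d0s0
  # zpool status rpool        <- then wait for the resilver to finish

If the bootblock was never installed on that disk, an installboot run on it
would also be needed before it can boot the system, but the replace itself
is the one command.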

--
Brandon Hume - hume -> BOFH.Ca, http://WWW.BOFH.Ca/
From: Michael Laajanen on
Hi,

hume.spamfilter(a)bofh.ca wrote:
> Michael Laajanen <michael_laajanen(a)yahoo.com> wrote:
>> What is the proper way to make c0t0d0s0 mirrored?
>
> What is the way you're trying now? You haven't shown the commands
> you're using. It should be a simple "zpool replace".
>


bash-3.00# zpool clear rpool
bash-3.00# zpool replace -f rpool c0t0d0s0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

bash-3.00# zpool detach rpool c0t0d0s0
bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t1d0s0  ONLINE       0     0     0

errors: No known data errors

/michael
From: hume.spamfilter on
Michael Laajanen <michael_laajanen(a)yahoo.com> wrote:
> /dev/dsk/c0t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).

What does "zdb -l /dev/dsk/c0t0d0s0" say? It looks like the system is
confused about the state of the filesystem on the disk (technically, it's
correct that the device still has a zpool on it...)

The easiest thing to do might be to destroy the zpool on that disk
using dd (null the first and last megabyte) and re-fmthard and re-attach
it, especially now that you've taken it out of the pool anyway.
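Untested, but something along these lines, assuming c0t1d0 is the good disk
(double-check the device names before running the dd's, and substitute the
slice's real size for the seek offset in the second one):

  # dd if=/dev/zero of=/dev/rdsk/c0t0d0s0 bs=1024k count=1
  # dd if=/dev/zero of=/dev/rdsk/c0t0d0s0 bs=1024k oseek=<size in MB, minus 1> count=1
  # prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
  # zpool attach rpool c0t1d0s0 c0t0d0s0
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

It's attach rather than replace because you've already detached c0t0d0s0,
and the installboot at the end puts the bootblock back on the slice the dd
just wiped.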

--
Brandon Hume - hume -> BOFH.Ca, http://WWW.BOFH.Ca/
From: Michael Laajanen on
Hi,

hume.spamfilter(a)bofh.ca wrote:
> Michael Laajanen <michael_laajanen(a)yahoo.com> wrote:
>> /dev/dsk/c0t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
>
> What does "zdb -l /dev/dsk/c0t0d0s0" say? It looks like the system is
> confused about the state of the filesystem on the disk (technically, it's
> correct that the device still has a zpool on it...)
>
> The easiest thing to do might be to destroy the zpool on that disk
> using dd (null the first and last megabyte) and re-fmthard and re-attach
> it, especially now that you've taken it out of the pool anyway.
>

bash-3.00# zdb -l /dev/dsk/c0t0d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=15
    name='rpool'
    state=0
    txg=4
    pool_guid=2255544692981663971
    hostid=2159107026
    hostname='tango'
    top_guid=5810209564273531140
    guid=2702942121932374796
    vdev_tree
        type='mirror'
        id=0
        guid=5810209564273531140
        metaslab_array=24
        metaslab_shift=29
        ashift=9
        asize=73341861888
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=2702942121932374796
                path='/dev/dsk/c0t0d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=14581663705942102880
                path='/dev/dsk/c0t1d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
                whole_disk=0
--------------------------------------------
LABEL 1
--------------------------------------------
    version=15
    name='rpool'
    state=0
    txg=4
    pool_guid=2255544692981663971
    hostid=2159107026
    hostname='tango'
    top_guid=5810209564273531140
    guid=2702942121932374796
    vdev_tree
        type='mirror'
        id=0
        guid=5810209564273531140
        metaslab_array=24
        metaslab_shift=29
        ashift=9
        asize=73341861888
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=2702942121932374796
                path='/dev/dsk/c0t0d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=14581663705942102880
                path='/dev/dsk/c0t1d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
                whole_disk=0
--------------------------------------------
LABEL 2
--------------------------------------------
    version=15
    name='rpool'
    state=0
    txg=4
    pool_guid=2255544692981663971
    hostid=2159107026
    hostname='tango'
    top_guid=5810209564273531140
    guid=2702942121932374796
    vdev_tree
        type='mirror'
        id=0
        guid=5810209564273531140
        metaslab_array=24
        metaslab_shift=29
        ashift=9
        asize=73341861888
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=2702942121932374796
                path='/dev/dsk/c0t0d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=14581663705942102880
                path='/dev/dsk/c0t1d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
                whole_disk=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=15
    name='rpool'
    state=0
    txg=4
    pool_guid=2255544692981663971
    hostid=2159107026
    hostname='tango'
    top_guid=5810209564273531140
    guid=2702942121932374796
    vdev_tree
        type='mirror'
        id=0
        guid=5810209564273531140
        metaslab_array=24
        metaslab_shift=29
        ashift=9
        asize=73341861888
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=2702942121932374796
                path='/dev/dsk/c0t0d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=14581663705942102880
                path='/dev/dsk/c0t1d0s0'
                devid='id1,sd@n5000000000000000/a'
                phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
                whole_disk=0
/michael