From: cindy on
On Apr 7, 10:15 am, Michael Laajanen <michael_laaja...(a)yahoo.com>
wrote:
> Hi,
>
> hume.spamfil...(a)bofh.ca wrote:
> > Michael Laajanen <michael_laaja...(a)yahoo.com> wrote:
> >> /dev/dsk/c0t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
>
> > What does "zdb -l /dev/dsk/c0t0d0s0" say?  It looks like the system is
> > confused about the state of the filesystem on the disk (technically, it's
> > correct that the device still has a zpool on it...)
>
> > The easiest thing to do might be to destroy the zpool on that disk
> > using dd (null the first and last megabyte) and re-fmthard and re-attach
> > it, especially now that you've taken it out of the pool anyway.
>
> bash-3.00# zdb -l /dev/dsk/c0t0d0s0
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>      version=15
>      name='rpool'
>      state=0
>      txg=4
>      pool_guid=2255544692981663971
>      hostid=2159107026
>      hostname='tango'
>      top_guid=5810209564273531140
>      guid=2702942121932374796
>      vdev_tree
>          type='mirror'
>          id=0
>          guid=5810209564273531140
>          metaslab_array=24
>          metaslab_shift=29
>          ashift=9
>          asize=73341861888
>          is_log=0
>          children[0]
>                  type='disk'
>                  id=0
>                  guid=2702942121932374796
>                  path='/dev/dsk/c0t0d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
>                  whole_disk=0
>          children[1]
>                  type='disk'
>                  id=1
>                  guid=14581663705942102880
>                  path='/dev/dsk/c0t1d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
>                  whole_disk=0
> --------------------------------------------
> LABEL 1
> --------------------------------------------
>      version=15
>      name='rpool'
>      state=0
>      txg=4
>      pool_guid=2255544692981663971
>      hostid=2159107026
>      hostname='tango'
>      top_guid=5810209564273531140
>      guid=2702942121932374796
>      vdev_tree
>          type='mirror'
>          id=0
>          guid=5810209564273531140
>          metaslab_array=24
>          metaslab_shift=29
>          ashift=9
>          asize=73341861888
>          is_log=0
>          children[0]
>                  type='disk'
>                  id=0
>                  guid=2702942121932374796
>                  path='/dev/dsk/c0t0d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
>                  whole_disk=0
>          children[1]
>                  type='disk'
>                  id=1
>                  guid=14581663705942102880
>                  path='/dev/dsk/c0t1d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
>                  whole_disk=0
> --------------------------------------------
> LABEL 2
> --------------------------------------------
>      version=15
>      name='rpool'
>      state=0
>      txg=4
>      pool_guid=2255544692981663971
>      hostid=2159107026
>      hostname='tango'
>      top_guid=5810209564273531140
>      guid=2702942121932374796
>      vdev_tree
>          type='mirror'
>          id=0
>          guid=5810209564273531140
>          metaslab_array=24
>          metaslab_shift=29
>          ashift=9
>          asize=73341861888
>          is_log=0
>          children[0]
>                  type='disk'
>                  id=0
>                  guid=2702942121932374796
>                  path='/dev/dsk/c0t0d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
>                  whole_disk=0
>          children[1]
>                  type='disk'
>                  id=1
>                  guid=14581663705942102880
>                  path='/dev/dsk/c0t1d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
>                  whole_disk=0
> --------------------------------------------
> LABEL 3
> --------------------------------------------
>      version=15
>      name='rpool'
>      state=0
>      txg=4
>      pool_guid=2255544692981663971
>      hostid=2159107026
>      hostname='tango'
>      top_guid=5810209564273531140
>      guid=2702942121932374796
>      vdev_tree
>          type='mirror'
>          id=0
>          guid=5810209564273531140
>          metaslab_array=24
>          metaslab_shift=29
>          ashift=9
>          asize=73341861888
>          is_log=0
>          children[0]
>                  type='disk'
>                  id=0
>                  guid=2702942121932374796
>                  path='/dev/dsk/c0t0d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@0,0:a'
>                  whole_disk=0
>          children[1]
>                  type='disk'
>                  id=1
>                  guid=14581663705942102880
>                  path='/dev/dsk/c0t1d0s0'
>                  devid='id1,sd@n5000000000000000/a'
>                  phys_path='/pci@1f,4000/scsi@3/sd@1,0:a'
>                  whole_disk=0
> /michael

Hi Michael,

The above output looks sane to me so it seems ZFS has an accurate view
of these disks.

I don't know why disk labels go funky, but it happens.

You might overwrite the c0t0d0s0 label with an EFI label, then relabel
it back to SMI and confirm the slices are correct. Sometimes wiping out
the label and recreating it helps.
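
Roughly, that relabel would look something like this in format's expert
mode (this is just a sketch from memory, so verify the prompts on your
system before writing anything):

# format -e c0t0d0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[0]: 1      (write an EFI label first)
    format> label
    Specify Label type[1]: 0      (then relabel back to SMI)
    format> partition             (re-create slice 0 to cover the disk)
    format> verify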

Then, try re-attaching c0t0d0s0, like this:

# zpool attach rpool c0t1d0s0 c0t0d0s0

If this works, don't forget to apply the bootblocks after the c0t0d0s0
disk is completely resilvered.
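
On SPARC that should be something along these lines (double-check the
bootblk path on your system before running it):

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0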

Thanks,

cindy
From: Michael Laajanen on
Hi,

cindy wrote:
> On Apr 7, 10:15 am, Michael Laajanen <michael_laaja...(a)yahoo.com>
> wrote:
>> Hi,
>>
><snip>

>> /michael
>
> Hi Michael,
>
> The above output looks sane to me so it seems ZFS has an accurate view
> of these disks.
>
> I don't know why disk labels go funky, but it happens.
>
> You might overwrite the c0t0d0s0 label with an EFI label, then relabel
> it back to SMI and confirm the slices are correct. Sometimes wiping out
> the label and recreating it helps.
>
> Then, try re-attaching c0t0d0s0, like this:
>
> # zpool attach rpool c0t1d0s0 c0t0d0s0
>
> If this works, don't forget to apply the bootblocks after the c0t0d0s0
> disk is completely resilvered.
>
> Thanks,
>
> cindy
Hi again, I managed to crash the system disk so I had to jumpstart the
server. Now I am back with the same issue, but this time c0t0d0s0 is the
working boot drive.

bash-3.00# prtvtoc /dev/rdsk/c0t0d0s0
* /dev/rdsk/c0t0d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*    1093 sectors/track
*       2 tracks/cylinder
*    2186 sectors/cylinder
*   65535 cylinders
*   65533 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
        0      2    00          0 143255138 143255137
        2      5    00          0 143255138 143255137
bash-3.00# prtvtoc /dev/rdsk/c0t1d0s0
* /dev/rdsk/c0t1d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*    1093 sectors/track
*       2 tracks/cylinder
*    2186 sectors/cylinder
*   65535 cylinders
*   65533 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
        0      2    00          0 143255138 143255137
        2      5    00          0 143255138 143255137


Below is after c0t1 detached again!

bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0


I have tried relabeling c0t1d0s0 from SMI -> EFI -> SMI, still the same issue.
bash-3.00# zpool attach rpool c0t0d0s0 c0t1d0s0
cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy

bash-3.00# zpool create foo c0t1d0s0
bash-3.00# zpool status foo
  pool: foo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          c0t1d0s0  ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool attach rpool c0t0d0s0 c0t1d0s0
cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy


What is the problem? The drive is clearly working, since I can create a pool foo on it!

/michael
From: cindy on
On Apr 7, 1:14 pm, Michael Laajanen <michael_laaja...(a)yahoo.com>
wrote:
> Hi,
>
> <snip>
>
> What is the problem? The drive is clearly working, since I can create a pool foo on it!
>
> /michael

Michael,

In the last case, of course, you need to destroy foo before
attaching c0t1d0s0, but other than that, I'm stumped.

If foo is destroyed, you might retry the zpool attach with
the -f option to see if any additional clues are provided:

# zpool attach -f rpool c0t0d0s0 c0t1d0s0

We had a bug where if you tried to attach a disk to create
a root pool mirror, you would get an overlapping slice message,
but I don't recall a device busy problem with zpool attach.
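
If it keeps reporting busy, it might be worth checking whether anything
else is holding on to that slice. These are just the usual suspects,
adjust the device names as needed:

# swap -l                      (is a slice on c0t1d0 in use as swap?)
# dumpadm                      (is it configured as the dump device?)
# metadb                       (any leftover SVM state database replicas?)
# fuser /dev/rdsk/c0t1d0s0     (any process holding the raw device open?)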

Thanks,

Cindy
From: Michael Laajanen on
Hi,

cindy wrote:
> On Apr 7, 1:14 pm, Michael Laajanen <michael_laaja...(a)yahoo.com>
> wrote:
>> Hi,
>>
>> <snip>
>>
>> /michael
>
> Michael,
>
> In the last case, of course, you need to destroy foo before
> attaching c0t1d0s0, but other than that, I'm stumped.
>
> If foo is destroyed, you might retry the zpool attach with
> the -f option to see if any additional clues are provided:
>
> # zpool attach -f rpool c0t0d0s0 c0t1d0s0
>
> We had a bug where if you tried to attach a disk to create
> a root pool mirror, you would get an overlapping slice message,
> but I don't recall a device busy problem with zpool attach.
>
> Thanks,
>
> Cindy
Right, I am also stumped!

bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
bash-3.00# zpool create foo c0t1d0s0
bash-3.00# zpool status foo
  pool: foo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          c0t1d0s0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool destroy foo
bash-3.00# zpool status foo
cannot open 'foo': no such pool
bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
bash-3.00# cat /etc/release
                       Solaris 10 10/09 s10s_u8wos_08a SPARC
           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                           Assembled 16 September 2009



/michael
From: cindy on
On Apr 7, 1:54 pm, Michael Laajanen <michael_laaja...(a)yahoo.com>
wrote:
> Hi,
>
> <snip>
>
> Right, I am also stumped!
>
> bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
> cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
> bash-3.00# zpool create foo c0t1d0s0
> bash-3.00# zpool status foo
>    pool: foo
>   state: ONLINE
>   scrub: none requested
> config:
>
>          NAME        STATE     READ WRITE CKSUM
>          foo         ONLINE       0     0     0
>            c0t1d0s0  ONLINE       0     0     0
>
> errors: No known data errors
> bash-3.00# zpool destroy foo
> bash-3.00# zpool status foo
> cannot open 'foo': no such pool
> bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
> cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
> bash-3.00# cat /etc/release
>                        Solaris 10 10/09 s10s_u8wos_08a SPARC
>             Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
>                          Use is subject to license terms.
>                             Assembled 16 September 2009
>
> /michael

Michael,

I did a google search on zpool attach, rpool, cannot attach, is busy
and found two error conditions:
1. the disks were not specified in the correct order, but this is not
your problem
2. this problem was reported with an Areca controller and I didn't see
a resolution reported.
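
One more thing you could try before re-attaching is the label wipe that
was suggested earlier in the thread: zero the first and last couple of
MB of the slice (where ZFS keeps its labels), copy the VTOC over from
the good disk, and retry. Something like the following; the seek value
is just your prtvtoc slice size (143255138 sectors) minus 4096 sectors,
so please double-check the arithmetic before pointing dd at the raw
device:

# dd if=/dev/zero of=/dev/rdsk/c0t1d0s0 bs=1024k count=2
# dd if=/dev/zero of=/dev/rdsk/c0t1d0s0 bs=512 seek=143251042 count=4096
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# zpool attach -f rpool c0t0d0s0 c0t1d0s0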

Works fine on my t2000 running identical bits as you:
# cat /etc/release
                       Solaris 10 10/09 s10s_u8wos_08a SPARC
           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                           Assembled 16 September 2009
# zpool attach rpool c1t0d0s0 c1t1d0s0
Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 3.41% done, 0h13m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0  1.27G resilvered

errors: No known data errors
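
For completeness, once the resilver finishes the follow-up on a setup
like this would be roughly (same caveat as before about verifying the
bootblk path on your system):

# zpool status rpool      (wait until the resilver is reported complete)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0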