From: Michael Laajanen on
Hi,

cindy wrote:
> On Apr 7, 1:54 pm, Michael Laajanen <michael_laaja...(a)yahoo.com>
> wrote:
>> Hi,
>>
>>
>>
>> cindy wrote:
>>> On Apr 7, 1:14 pm, Michael Laajanen <michael_laaja...(a)yahoo.com>
>>> wrote:
>>>> Hi,
>>> Thanks,
>>> Cindy
>> Right, I am also stumped!
>>
>> bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
>> cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
>> bash-3.00# zpool create foo c0t1d0s0
>> bash-3.00# zpool status foo
>>    pool: foo
>>   state: ONLINE
>>   scrub: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         foo         ONLINE       0     0     0
>>           c0t1d0s0  ONLINE       0     0     0
>>
>> errors: No known data errors
>> bash-3.00# zpool destroy foo
>> bash-3.00# zpool status foo
>> cannot open 'foo': no such pool
>> bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
>> cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
>> bash-3.00# cat /etc/release
>>                       Solaris 10 10/09 s10s_u8wos_08a SPARC
>>            Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
>>                         Use is subject to license terms.
>>                            Assembled 16 September 2009
>>
>> /michael
>
> Michael,
>
> I did a google search on zpool attach, rpool, cannot attach, is busy
> and found two error conditions:
> 1. the disks were not specified in the correct order, but this is not
> your problem
right

> 2. this problem was reported with an Areca controller and I didn't see
> a resolution reported.
>
This is an E450, all Sun gear, but the disks for the root pool are new
15K 73GB Seagates (SEAGATE-ST373455LC-0003).
I have 16 300GB drives in the main pool, all working fine!

> Works fine on my t2000 running identical bits as you:
> # cat /etc/release
>                       Solaris 10 10/09 s10s_u8wos_08a SPARC
>            Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
>                         Use is subject to license terms.
>                            Assembled 16 September 2009
> # zpool attach rpool c1t0d0s0 c1t1d0s0
> Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
> # zpool status rpool
>   pool: rpool
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>  scrub: resilver in progress for 0h0m, 3.41% done, 0h13m to go
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         rpool         ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c1t0d0s0  ONLINE       0     0     0
>             c1t1d0s0  ONLINE       0     0     0  1.27G resilvered
>
> errors: No known data errors
>
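
[For reference, the installboot(1M) step the message above refers to looks
roughly like this on SPARC; the device name is only an example and should
be the newly attached mirror half:]

    # SPARC: install the ZFS boot block on the newly attached mirror half
    # (replace c1t1d0s0 with the slice you just attached)
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
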
The funny thing is that the installer also fails!

        0. c0t0d0 <SEAGATE-ST373455LC-0003 cyl 65533 alt 2 hd 2 sec 1093>
           /pci@1f,4000/scsi@3/sd@0,0
        1. c0t1d0 <SEAGATE-ST373455LC-0003 cyl 65533 alt 2 hd 2 sec 1093>
           /pci@1f,4000/scsi@3/sd@1,0
        2. c2t0d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3/sd@0,0
        3. c2t1d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3/sd@1,0
        4. c2t2d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3/sd@2,0
        5. c2t3d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3/sd@3,0
        6. c3t0d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3,1/sd@0,0
        7. c3t1d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3,1/sd@1,0
        8. c3t2d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3,1/sd@2,0
        9. c3t3d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@3,1/sd@3,0
       10. c4t0d0 <HITACHI-HUS103030FL3800-SA1B-279.40GB>
           /pci@6,4000/scsi@4/sd@0,0
       11. c4t1d0 <FUJITSU-MAW3300NC-0104-279.40GB>
           /pci@6,4000/scsi@4/sd@1,0
       12. c4t2d0 <MAXTOR-ATLAS10K5_300SCA-JNZH-279.40GB>
           /pci@6,4000/scsi@4/sd@2,0
       13. c4t3d0 <HITACHI-HUS103030FL3800-SA1B-279.40GB>
           /pci@6,4000/scsi@4/sd@3,0
       14. c5t0d0 <HITACHI-HUS103030FL3800-SA1B-279.40GB>
           /pci@6,4000/scsi@4,1/sd@0,0
       15. c5t1d0 <HITACHI-HUS103030FL3800-SA1B-279.40GB>
           /pci@6,4000/scsi@4,1/sd@1,0
       16. c5t2d0 <HITACHI-HUS103030FL3800-SA1B-279.40GB>
           /pci@6,4000/scsi@4,1/sd@2,0
       17. c5t3d0 <HITACHI-HUS103030FL3800-SA1B-279.40GB>
           /pci@6,4000/scsi@4,1/sd@3,0

Fine, I can run without the root pool mirrored and do a dump from the
active drive to the standby each night, but why does it not work mirrored?
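
[A minimal sketch of that nightly fallback copy, assuming the standby disk
holds a single-disk pool named rpool2; the pool and snapshot names are
hypothetical, and how the received file systems are mounted depends on the
setup:]

    # snapshot the root pool and copy it to a pool on the standby disk
    # ("rpool2" and the snapshot name are placeholders; run e.g. from cron)
    SNAP=nightly-$(date +%Y%m%d)
    zfs snapshot -r rpool@$SNAP
    # -R sends the whole pool hierarchy; -F -d receives it under rpool2
    # (take care not to mount the received root file systems over the live ones)
    zfs send -R rpool@$SNAP | zfs receive -Fd rpool2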



/michael

From: cindy on
On Apr 7, 2:38 pm, Michael Laajanen <michael_laaja...(a)yahoo.com>
wrote:
> [...]
>
> The funny thing is that the installer also fails!
>
> [...]
>
> Fine, I can run without the root pool mirrored and do a dump from the
> active drive to the standby each night, but why does it not work
> mirrored?
>
> /michael

Michael,

Oh right, the install program couldn't create the root pool mirror
either, but it probably just calls the same zpool attach operation.

Something about this disk is not connected properly, but I'm not sure
how to troubleshoot from here.

Maybe the hardware experts on this list can comment.

Cindy
From: Stefaan A Eeckels on
On Wed, 7 Apr 2010 13:13:42 -0700 (PDT)
cindy <cindy.swearingen(a)sun.com> wrote:

> I did a google search on zpool attach, rpool, cannot attach, is busy
> and found two error conditions:
> 1. the disks were not specified in the correct order, but this is not
> your problem
> 2. this problem was reported with an Areca controller and I didn't see
> a resolution reported.

I had a similar problem (disk busy on attach) after a V440 crashed. The
system has a zpool created from 6 disks on a StorEdge 3320, and after
it came back up one of the disks was shown as failed. There was nothing
wrong with it, however, as it showed up OK under format and was
readable with dd.
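
[A quick read check along those lines might look like this; the disk name
is a placeholder:]

    # non-destructive read test of the whole-disk slice (s2)
    dd if=/dev/rdsk/c1t2d0s2 of=/dev/null bs=1024k count=100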

As in this case, zpool attach (with and without -f) failed, with zpool
claiming that the disk was in use. I finally managed to solve the
problem by exporting the zpool, re-labelling the "failed" disk with
fmthard, and then importing the zpool. After this (rather scary, but I
had made a backup) sequence the disk attached without problems and
re-silvered.
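
[A rough sketch of that sequence, using a hypothetical data pool "tank"
with a healthy member c1t1d0 and the "failed" disk c1t2d0; it is not the
exact transcript:]

    # 1. export the pool so nothing holds its devices open
    zpool export tank
    # 2. copy the VTOC label from a healthy disk onto the "failed" one
    prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2
    # 3. import the pool again and re-attach the relabelled disk
    zpool import tank
    zpool attach -f tank c1t1d0 c1t2d0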

My guess is that the crash caused some subtle changes to the disk that
made ZFS think it was invalid, but also that it was still in use in the
zpool.
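
[One way to look for that kind of leftover state is to dump whatever ZFS
labels are still on the slice; stale pool metadata, if present, shows up
in the output:]

    # print any ZFS labels still present on the slice (read-only)
    zdb -l /dev/rdsk/c0t1d0s0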

--
Stefaan A Eeckels
--
"Object-oriented programming is an exceptionally bad idea which
could only have originated in California." --Edsger Dijkstra
From: Michael Laajanen on
Hi,
Stefaan A Eeckels wrote:
> I had a similar problem (disk busy on attach) after a V440 crashed. The
> system has a zpool created from 6 disks on a StorEdge 3320, and after
> it came back up one of the disks was shown as failed. There was nothing
> wrong with it, however, as it showed up OK under format and was
> readable with dd.
>
> As in this case, zpool attach (with and without -f) failed, with zpool
> claiming that the disk was in use. I finally managed to solve the
> problem by exporting the zpool, re-labelling the "failed" disk with
> fmthard, and then importing the zpool. After this (rather scary, but I
> had made a backup) sequence the disk attached without problems and
> re-silvered.
>
Hmm, you say you exported the pool; can that be done while running on
the pool (rpool)?


> My guess is that the crash caused some subtle changes to the disk that
> made ZFS think it was invalid, but also that it was still in use in the
> zpool.
>
I tried this without success.

bash-3.00# prtvtoc /dev/rdsk/c0t0d0s0 | fmthard -s - /dev/rdsk/c0t1d0s0
fmthard: New volume table of contents now in place.
bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
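
[A few generic checks, none specific to this box, that can show what is
holding a slice busy on Solaris 10:]

    # is any slice of the disk mounted, in use for swap, or the dump device?
    mount | grep c0t1d0
    swap -l
    dumpadm
    # any leftover SVM state database replicas or metadevices on it?
    metadb
    metastat -p
    # any process holding the device nodes open?
    fuser /dev/rdsk/c0t1d0s0 /dev/dsk/c0t1d0s0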


/michael
From: Victor on
On Apr 9, 4:36 am, Michael Laajanen <michael_laaja...(a)yahoo.com>
wrote:
> [...]
>
> I tried this without success.
>
> bash-3.00# prtvtoc /dev/rdsk/c0t0d0s0 | fmthard -s - /dev/rdsk/c0t1d0s0
> fmthard:  New volume table of contents now in place.
> bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
> cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
>
> /michael

Interesting! Per your second post,

bash-3.00# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t1d0s0  ONLINE       0     0     0

c0t1d0s0 is already ONLINE in rpool there, but in later posts you are
trying to attach it to rpool as the new half of the mirror.

It seems the box is not in production yet, so you could just reinstall
it with JumpStart, specifying the mirror in the profile.
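
[A sketch of such a profile; the pool keyword takes the pool name, pool
size, swap size, dump size and the vdev list, and everything below except
the mirrored disk pair is a placeholder:]

    # JumpStart profile fragment for a mirrored ZFS root pool
    install_type  initial_install
    cluster       SUNWCXall
    pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0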

Victor