From: Michael Laajanen on
Hi,

Victor wrote:
> On Apr 9, 4:36 am, Michael Laajanen <michael_laaja...(a)yahoo.com>
> wrote:
>> Hi,
>>
>>
>>
>> Stefaan A Eeckels wrote:
>>> On Wed, 7 Apr 2010 13:13:42 -0700 (PDT)
>>> cindy <cindy.swearin...(a)sun.com> wrote:
>>>> I did a Google search on zpool attach, rpool, cannot attach, is busy
>>>> and found two error conditions:
>>>> 1. the disks were not specified in the correct order, but this is not
>>>> your problem
>>>> 2. this problem was reported with an Areca controller and I didn't see
>>>> a resolution reported.
>>> I had a similar problem (disk busy on attach) after a V440 crashed. The
>>> system has a zpool created from 6 disks on a StorEdge 3320, and after
>>> it came back up one of the disks was shown as failed. There was nothing
>>> wrong with it, however, as it showed up OK under format and was
>>> readable with dd.
>>> As in this case, zpool attach (with and without -f) failed, zpool
>>> claiming that the disk was in use. I finally managed to solve the
>>> problem by exporting the zpool, re-labelling the "failed" disk with
>>> fmthard, and then importing the zpool. After this (rather scary, but I
>>> had made a backup) sequence the disk attached without problems and
>>> re-silvered.
>> Hmm, you say you exported the pool; can that be done while running on the
>> pool (rpool)?
>>
>>> My guess is that the crash caused some subtle changes to the disk that
>>> made ZFS think it was invalid, but also that it was still in use in the
>>> zpool.
>> I tried this without success.
>>
>> bash-3.00# prtvtoc /dev/rdsk/c0t0d0s0 | fmthard -s - /dev/rdsk/c0t1d0s0
>> fmthard: New volume table of contents now in place.
>> bash-3.00# zpool attach -f rpool c0t0d0s0 c0t1d0s0
>> cannot attach c0t1d0s0 to c0t0d0s0: c0t1d0s0 is busy
>>
>> /michael
>
> Interesting! Per your second post,
>
> bash-3.00# zpool status rpool
>   pool: rpool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         rpool       ONLINE       0     0     0
>           c0t1d0s0  ONLINE       0     0     0
>
> c0t1d0s0 is already ONLINE in rpool, yet in later posts you are trying to
> attach it to rpool as the new device, which is why it reports busy.
>
> Seems that the box is not in production yet. Then you can just
> jumpstart to reinstall it, specifying mirror in the profile.
>
> Victor
The machine is running as an NFS server, but not yet as a NIS server; that
role is still on the old server.

I have tried the install two or three times; the latest time c0t1 came up as
the working root drive, whereas the previous time it was c0t0, which is why
it looks this way.

Both c0t0 and c0t1 were set as boot drives, but it does not work for
some reason.

There must be a way to force c0t0d0s0 to start as a mirror of c0t1d0s0!
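
For what it is worth, zpool attach expects the existing pool device first and
the new device second. With rpool currently built on c0t1d0s0 (per the status
output above), a plausible sequence would be something like the following; the
device names are simply the ones from this thread, and the installboot step is
the usual SPARC follow-up to put a ZFS boot block on the new half of the mirror:

   zpool attach rpool c0t1d0s0 c0t0d0s0
   installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

Once the resilver completes, either disk should be bootable.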

/michael
From: Voropaev Pavel on
On Apr 10, 12:27 am, Michael Laajanen <michael_laaja...(a)yahoo.com>
wrote:
> There must be a way to force c0t0d0s0 to start as a mirror of c0t1d0s0!

You can try SVM.
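
If you do reinstall with a UFS root (SVM mirroring applies to a UFS root, not
to the ZFS rpool here), a minimal sketch of the classic SVM root-mirroring
sequence would look roughly like this; the metadevice names and the s7
state-database slices are only conventional examples:

   metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
   metainit -f d11 1 1 c0t0d0s0
   metainit d12 1 1 c0t1d0s0
   metainit d10 -m d11
   metaroot d10
   (reboot, then)
   metattach d10 d12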