From: Mark Jacobs
I'm running snv_118 without any problems on my hardware, but every
later release that I've tried gives me an error on the last drive of my
raidz2 pool. I just tried the latest snv_134 snapshot, and once again the
pool came up in degraded mode.

Earlier attempts to run later snapshots gave me checksum errors, so
before upgrading today I ran a zpool scrub, which reported no errors in
the pool. Under snv_134 it's a different error, but the same end result.
When I go back to snv_118 the pool comes up fine.
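
For what it's worth, that pre-upgrade check was roughly the following (run from snv_118; the exact invocation is from memory):

# zpool scrub mark
# zpool status -v mark

with the status command repeated until the scrub had completed, reporting no errors.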

Any ideas?
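
The zpool status output below says to attach the missing device and 'zpool online' it, so presumably the next step would be something like the following, assuming snv_134 can enumerate the disk at all:

# cfgadm -al
# format </dev/null
# zpool online mark c8t5d0
# zpool status -x mark

i.e. first check whether the OS sees c8t5d0, then try to online it and re-check the pool. But given the "cannot open" state I'm not sure the device is even visible under snv_134.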

--------

Mark Jacobs


-----

mark@opensolaris:~# zpool status mark
  pool: mark
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mark        DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c8t5d0  UNAVAIL      0     0     0  cannot open

errors: No known data errors

fmadm faulty -a
--------------- ------------------------------------ -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------ -------------- ---------
Mar 28 18:04:14 72a847c9-72b8-c9f9-e129-e32a9527de89  ZFS-8000-D3    Major

Host        : opensolaris
Platform    : System-Product-Name    Chassis_id  : System-Serial-Number
Product_sn  :

Fault class : fault.fs.zfs.device
Affects     : zfs://pool=mark/vdev=ec8808ee9f058d20
                  faulted and taken out of service
Problem in  : zfs://pool=mark/vdev=ec8808ee9f058d20
                  faulted and taken out of service

Description : A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for
              more information.

Response    : No automated response will occur.

Impact      : Fault tolerance of the pool may be compromised.

Action      : Run 'zpool status -x' and replace the bad device.

Nov 01 2009 14:59:02.575987336 ereport.fs.zfs.checksum
nvlist version: 0
        class = ereport.fs.zfs.checksum
        ena = 0x10ea09ddfa00401
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x6f39e1c7b0ed3a51
                vdev = 0xec8808ee9f058d20
        (end detector)
        pool = mark
        pool_guid = 0x6f39e1c7b0ed3a51
        pool_context = 1
        pool_failmode = wait
        vdev_guid = 0xec8808ee9f058d20
        vdev_type = disk
        vdev_path = /dev/dsk/c8t5d0s0
        vdev_devid = id1,sd@SATA_____Hitachi_HDT72101______STF610MR1Z72NP/a
        parent_guid = 0xd00b4d053e99f0b
        parent_type = raidz
        zio_err = 0
        zio_offset = 0xd555b4000
        zio_size = 0x200
        zio_objset = 0x0
        zio_object = 0x0
        zio_level = 0
        zio_blkid = 0x9
        __ttl = 0x1
        __tod = 0x4aede886 0x2254de88

Mar 28 2010 18:03:37.491131186 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
        class = ereport.fs.zfs.vdev.open_failed
        ena = 0x215642c0ee00001
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x6f39e1c7b0ed3a51
                vdev = 0xec8808ee9f058d20
        (end detector)
        pool = mark
        pool_guid = 0x6f39e1c7b0ed3a51
        pool_context = 1
        pool_failmode = wait
        vdev_guid = 0xec8808ee9f058d20
        vdev_type = disk
        vdev_path = /dev/dsk/c8t5d0s0
        vdev_devid = id1,sd@SATA_____Hitachi_HDT72101______STF610MR1Z72NP/a
        parent_guid = 0xd00b4d053e99f0b
        parent_type = raidz
        prev_state = 0x1
        __ttl = 0x1
        __tod = 0x4bafd239 0x1d461132

Mar 28 2010 18:03:37.491130788 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
        class = ereport.fs.zfs.vdev.open_failed
        ena = 0x215642c0ee00001
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x6f39e1c7b0ed3a51
                vdev = 0xec8808ee9f058d20
        (end detector)
        pool = mark
        pool_guid = 0x6f39e1c7b0ed3a51
        pool_context = 1
        pool_failmode = wait
        vdev_guid = 0xec8808ee9f058d20
        vdev_type = disk
        vdev_path = /dev/dsk/c8t5d0s0
        vdev_devid = id1,sd@SATA_____Hitachi_HDT72101______STF610MR1Z72NP/a
        parent_guid = 0xd00b4d053e99f0b
        parent_type = raidz
        prev_state = 0x1
        __ttl = 0x1
        __tod = 0x4bafd239 0x1d460fa4