From: jay on 30 Apr 2010 14:48

here's how it's supposed to work:

on the 1st machine:

zpool export ship

move the disk to the 2nd machine, and over there:

zpool import ship

i have this note from oracle support that says you need the SUNWXCall cluster installed on both -- that is the biggest cluster you can choose during install (except maybe the one w/ 3rd party drivers). any truth to this? unfortunately, i have only one test machine. i have the developer cluster, not the entire cluster. on the single machine, it works.

/etc/release sez: Solaris 10 5/09 s10s_u7wos_08

thanks in advance.
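(For reference, a sketch of the whole round-trip jay describes, with a couple of sanity checks added. The pool name "ship" is from the post; everything else here is illustrative and not verified on this U7 release.)

on the 1st machine:

  # zpool status ship      (confirm the pool is healthy before pulling the disk)
  # zpool export ship      (flushes the pool and marks it exported on disk)

move the disk, then on the 2nd machine:

  # zpool import           (with no arguments: scans devices and lists importable pools)
  # zpool import ship      (import by name, or by the numeric id "zpool import" printed)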
From: webjuan on 30 Apr 2010 15:18

On Apr 30, 2:48 pm, jay <bigcra...(a)gmail.com> wrote:
> here's how it's supposed to work:
>
> on the 1st machine:
> zpool export ship
>
> move the disk to the 2nd machine, and over there:
>
> zpool import ship
>
> i have this note from oracle support that says
> you need the SUNWXCall cluster installed on both -- that is
> the biggest cluster you can choose during
> install (except maybe the one w/ 3rd party
> drivers). any truth to this? unfortunately, i have
> only one test machine. i have the developer cluster,
> not the entire cluster. on the single machine, it works.
>
> /etc/release sez: Solaris 10 5/09 s10s_u7wos_08
>
> thanks in advance.

I haven't heard about having SUNWXCall as a requirement on both machines; that sounds a little too excessive to me. However, I could be wrong. One thing you might want to ensure is that both systems have the same ZFS version, or that the 2nd machine has a newer version.

For example, on my Solaris 10 5/09 box:

(labbox)# zfs upgrade
This system is currently running ZFS filesystem version 3.

All filesystems are formatted with the current version.

(labbox)# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)

For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/zpl/N

Where 'N' is the version number.

Hope that points you in the right direction.

juan martinez
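(A side note, not from the thread: "zfs upgrade" reports the filesystem (ZPL) version, but what actually gates "zpool import" is the pool version, so it may be worth comparing that across the two boxes as well. A sketch, reusing the pool name from the post:)

  # zpool upgrade            (reports the pool version this system is running)
  # zpool upgrade -v         (lists every pool version this system can import)
  # zpool get version ship   (the version of this particular pool)

a pool should import on any system whose supported pool version is greater than or equal to the pool's own version.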
From: jay on 30 Apr 2010 16:19

On Apr 30, 2:18 pm, webjuan <webj...(a)gmail.com> wrote:
> On Apr 30, 2:48 pm, jay <bigcra...(a)gmail.com> wrote:
> > here's how it's supposed to work:
> >
> > on the 1st machine:
> > zpool export ship
> >
> > move the disk to the 2nd machine, and over there:
> >
> > zpool import ship
> >
> > i have this note from oracle support that says
> > you need the SUNWXCall cluster installed on both -- that is
> > the biggest cluster you can choose during
> > install (except maybe the one w/ 3rd party
> > drivers). any truth to this? unfortunately, i have
> > only one test machine. i have the developer cluster,
> > not the entire cluster. on the single machine, it works.
> >
> > /etc/release sez: Solaris 10 5/09 s10s_u7wos_08
> >
> > thanks in advance.
>
> I haven't heard about having SUNWXCall as a requirement on both
> machines; that sounds a little too excessive to me. However, I could be
> wrong. One thing you might want to ensure is that both systems have
> the same ZFS version, or that the 2nd machine has a newer version.
>
> For example, on my Solaris 10 5/09 box:
>
> (labbox)# zfs upgrade
> This system is currently running ZFS filesystem version 3.
>
> All filesystems are formatted with the current version.
>
> (labbox)# zfs upgrade -v
> The following filesystem versions are supported:
>
> VER  DESCRIPTION
> ---  --------------------------------------------------------
>  1   Initial ZFS filesystem version
>  2   Enhanced directory entries
>  3   Case insensitive and File system unique identifier (FUID)
>
> For more information on a particular version, including supported
> releases, see:
>
> http://www.opensolaris.org/os/community/zfs/version/zpl/N
>
> Where 'N' is the version number.
>
> Hope that points you in the right direction.
>
> juan martinez

that SUNWCXall was excessive was my reaction, too. however, it occurs to me that i might be able to get a little closer to the truth on a machine at another site. i can't actually move the disk between machines, but i can move it from one disk slot to another. maybe that will look "different enough" to tell me something.

j.
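(If jay tries the slot-move test, something like this should exercise the same paths; the devfsadm step is an assumption -- on some hardware the device links get rebuilt automatically:)

  # zpool export ship
    (physically move the disk to the other slot)
  # devfsadm -C            (rebuild /dev links and prune the stale ones)
  # zpool import           ("ship" should be listed under its new device name,
                            since import identifies pools by their on-disk labels
                            rather than by the old device paths)
  # zpool import ship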
From: Ian Collins on 30 Apr 2010 17:27

On 05/ 1/10 06:48 AM, jay wrote:
> here's how it's supposed to work:
>
> on the 1st machine:
> zpool export ship
>
> move the disk to the 2nd machine, and over there:
>
> zpool import ship
>
> i have this note from oracle support that says
> you need the SUNWXCall cluster installed on both -- that is
> the biggest cluster you can choose during
> install (except maybe the one w/ 3rd party
> drivers). any truth to this? unfortunately, i have
> only one test machine. i have the developer cluster,
> not the entire cluster. on the single machine, it works.
>
> /etc/release sez: Solaris 10 5/09 s10s_u7wos_08

I've never heard that one before. I've been swapping ZFS drives between machines since ZFS first appeared.

I assume your subject is misleading and you are moving a pool between systems.

--
Ian Collins
From: Richard B. Gilbert on 30 Apr 2010 17:47

Ian Collins wrote:
> On 05/ 1/10 06:48 AM, jay wrote:
>> here's how it's supposed to work:
>>
>> on the 1st machine:
>> zpool export ship
>>
>> move the disk to the 2nd machine, and over there:
>>
>> zpool import ship
>>
>> i have this note from oracle support that says
>> you need the SUNWXCall cluster installed on both -- that is

TYPO ALERT!!!!!!!

Believe that should read SUNWCXall.

Dylsexia alert?? ;-)
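(For what it's worth, the installed metacluster can be checked directly on each box; the path below is from memory of Solaris 10 and should be treated as an assumption:)

  # cat /var/sadm/system/admin/CLUSTER
    (prints a line like CLUSTER=SUNWCprog for the developer cluster,
     or CLUSTER=SUNWCXall for the entire distribution plus OEM support)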