From: Richard B. Gilbert on
Zfs.. wrote:
> Folks,
>
> Some worrying behavior with zfs.
>
> While doing some testing we noticed that we were able to import the
> same zpool using the -f option to two separate machines at the same
> time.
>
> The zpool resides on SAN storage that both servers can see. Importing
> the zpool to one machine is no problem, but while imported there we
> can also import it to the second machine.. This is not good.
>
> We then decided to be wicked and mounted a filesystem on one side,
> wrote a file into it and then on the other side ran a scrub.. Needless
> to say, the server that tried to do the scrub crashed.
>
> So, basically, besides the obvious comment of "just don't do that", is
> there a way to "lock" a zpool down to a particular machine while it is
> imported ?
>
> I think you should be able to do this because if not then there is
> potential for some disastrous situations to arise.
>
> Any thoughts are welcome !

Historically, Unix has been deficient in "locking" just about anything.
The last time I looked, it was possible for two users to open a file with
write access at the same time. I cheerfully admit that it has been
years since I last looked!
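To illustrate (hypothetical scratch file, two ordinary login sessions):

    # session 1
    $ exec 3>>/tmp/demo     # open the file for writing (append): succeeds
    # session 2, at the same time
    $ exec 3>>/tmp/demo     # also succeeds; nothing stops the second writer

File locking (fcntl/flock) is advisory, so it only protects you if every
writer asks for it.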

Be prepared for disaster BEFORE you experiment!
From: Colin B. on
Zfs.. <cian.scripter(a)gmail.com> wrote:

....

> I know that we are using the -f option, which is forcing the import to
> the second node... however, there could be a situation where using the
> -f while the pool is still active on another system can be extremely
> dangerous!

Here's a brief description of the traditional Unix safety features.
"You can have as much rope as you want, but you have to ask for it."

The "-f" flag is asking for enough rope to hang yourself. It's up to
you to be smart enough to avoid doing so. Personally, I'll take the
responsibility (and the scars), rather than having a computer say "Oh
no, we don't think you should do that." (Go hunt down my rant on the
stupid protections built into live-upgrade/swap/format for an example.)

However, having a property flag seems like a good idea. That should be
a doable feature. Still, at the end of the day you have to accept that
if you need to 'dash eff' a command, you'd better be SURE you know
exactly what you're doing!

Colin
From: Zfs.. on
On Jan 18, 5:20 pm, Darren Dunham <darren.dun...(a)gmail.com> wrote:
> On Jan 18, 9:00 am, "Zfs.." <cian.scrip...(a)gmail.com> wrote:
>
> > While doing some testing we noticed that we were able to import the
> > same zpool using the -f option to two separate machines at the same
> > time.
>
> Given the "-f" option means "force", that doesn't surprise me.  You
> can do the same thing with Symantec Volume manager and the correct
> flags as well.

Thanks for the reply, Darren. I do understand that I'm forcing the
import of the pool, and of course if you use this flag you had better
know what you are doing. However, with ZFS this is the ONLY way to
import the pool on another host, which seems funny to me. Correct me
if I'm wrong, but this has been my experience with ZFS.

If you issue 'zpool import mypool' on a pool that was last accessed on
another system, it will tell you to use the -f option.
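Roughly what you see (the pool name is just an example, and the exact
wording varies by release):

    # on the second node, with the pool still imported on the first:
    $ zpool import mypool
    cannot import 'mypool': pool may be in use from other system
    use '-f' to import anyway

    # the only way past that is the force flag:
    $ zpool import -f mypool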

This is something I think Sun needs to look at; surely there has to be
another way to do this without passing -f to the import.

>
> > The zpool resides on SAN storage that both servers can see. Importing
> > the zpool to one machine is no problem, but while imported there we
> > can also import it to the second machine.. This is not good.
>
> So don't use "-force".  If it doesn't work without -force, you should
> be asking yourself why and what is wrong.

As above, that is just the way ZFS works, which is "odd".


>
> > We then decided to be wicked and mounted a filesystem on one side,
> > wrote a file into it and then on the other side ran a scrub.. Needless
> > to say, the server that tried to do the scrub crashed.
>
> Yes.  If the pool is imported simultaneously, it is almost guaranteed
> to be corrupted.

Funnily enough, it wasn't; the pool worked away perfectly well on the
second node. A scrub ran fine. I think that when I imported the pool,
the second node simply grabbed it, leaving the first node in a kind of
ZFS limbo: it thought it still had the pool, but when it went to access
it and realised it wasn't there, it panicked, as per the failmode
property.
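For reference (hypothetical pool name again), failmode is an ordinary
pool property, so you can check and change what a node does when it
loses the pool:

    $ zpool get failmode mypool
    NAME    PROPERTY  VALUE     SOURCE
    mypool  failmode  panic     local

    # wait (the default) blocks I/O until the pool comes back, continue
    # returns errors to applications, and panic does what we saw here
    $ zpool set failmode=wait mypool
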
>
> > So, basically, besides the obvious comment of "just don't do that", is
> > there a way to "lock" a zpool down to a particular machine while it is
> > imported ?
>
> It is by default (at least in later versions).  But -f lets you
> override the lock.  There has to be some way to do this because the
> host holding the lock can crash.
>
> The first versions of ZFS did not have such a layer of protection and
> multiple non-forced imports would succeed.

Again, maybe some kind of lockhost property could be added to ZFS to
prevent this, keeping the -f flag as a failsafe in case the host
crashes and you need to import the pool on another system.

The -f could override the lockhost property. I think it could be a
good safety feature. The property could be unset on an export of the
pool.
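Just to sketch what I mean (lockhost does not exist today; the property
name and messages here are purely made up to show the idea):

    # hypothetical: pin the pool to this host at import time
    $ zpool set lockhost=node1 mypool

    # a plain import on any other node would then refuse outright
    $ zpool import mypool
    cannot import 'mypool': pool is locked to host 'node1'

    # only -f (say, after node1 has crashed) would override it,
    # and 'zpool export' on node1 would clear the property again
    $ zpool import -f mypool
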
>
> --
> Darren

Thanks for your comments, Darren.

From: Zfs.. on
On Jan 18, 5:26 pm, "Richard B. Gilbert" <rgilber...(a)comcast.net>
wrote:
> Zfs.. wrote:
> > Folks,
>
> > Some worrying behavior with zfs.
>
> > > While doing some testing we noticed that we were able to import the
> > same zpool using the -f option to two separate machines at the same
> > time.
>
> > The zpool resides on SAN storage that both servers can see. Importing
> > the zpool to one machine is no problem, but while imported there we
> > can also import it to the second machine.. This is not good.
>
> > We then decided to be wicked and mounted a filesystem on one side,
> > wrote a file into it and then on the other side ran a scrub.. Needless
> > to say, the server that tried to do the scrub crashed.
>
> > So, basically, besides the obvious comment of "just don't do that", is
> > there a way to "lock" a zpool down to a particular machine while it is
> > imported ?
>
> > I think you should be able to do this because if not then there is
> > > potential for some disastrous situations to arise.
>
> > Any thoughts are welcome !
>
> Historically, Unix has been deficient in "locking" just about anything.
> The last time I looked, it was possible for two users to open a file with
> write access at the same time.  I cheerfully admit that it has been
> years since I last looked!
>
> Be prepared for disaster BEFORE you experiment!

Hi Richard,

Yes, we have been prepared for disaster, and have encountered it many
times while testing. It's all fun though. Right?

:-)
From: Zfs.. on
On Jan 18, 7:01 pm, "Colin B." <cbi...(a)somewhereelse.shaw.ca> wrote:
> Zfs.. <cian.scrip...(a)gmail.com> wrote:
>
> ...
>
> > I know that we are using the -f option, which is forcing the import to
> > the second node... however, there could be a situation where using the
> > -f while the pool is still active on another system can be extremely
> > dangerous!
>
> Here's a brief description of the traditional Unix safety features.
> "You can have as much rope as you want, but you have to ask for it."
>
> The "-f" flag is asking for enough rope to hang yourself. It's up to
> you to be smart enough to avoid doing so. Personally, I'll take the
> responsibility (and the scars), rather than having a computer say "Oh
> no, we don't think you should do that." (Go hunt down my rant on the
> stupid protections built into live-upgrade/swap/format for an example.)
>
> However, having a property flag seems like a good idea. That should be
> a doable feature. Still, at the end of the day you have to accept that
> if you need to 'dash eff' a command, you'd better be SURE you know
> exactly what you're doing!
>
> Colin

Cheers for the response, Colin...

And thanks to the group; good to see there are still good admins out
there ready to help...