From: "Kevin Grittner" on
Chris Browne <cbbrowne(a)acm.org> wrote:
> robertmhaas(a)gmail.com (Robert Haas) writes:

>> I suspect Kevin's patch will solve it if using a sufficiently
>> high transaction isolation level, but something else might be
>> needed otherwise. However, I confess to ignorance as to the
>> underlying issues. Why is MERGE worse in this regard than, say,
>> UPDATE?
>
> It's worse than UPDATE because
> - It could be an INSERT, if the data's new, but
> - If the data's there, it becomes an UPDATE, but
> - If some concurrent update has just DELETEd the data that's
> there, it becomes an INSERT again, but
> - Oops, that DELETE rolled back, so it's an UPDATE again...
>
> Recurse as needed to make it more undecidable as to whether it's
> really an INSERT or an UPDATE :-).
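The flip-flop above is the classic upsert race. A minimal sketch in plain Python, simulating the table with a dict and a hypothetical `concurrent_ops` hook standing in for interleaved writers (none of these names come from the MERGE patch):

```python
def upsert(table, key, value, concurrent_ops=(), max_attempts=100):
    """Classic retry loop: UPDATE if the row is visible, else INSERT.
    `concurrent_ops` simulates writers that sneak in between our
    visibility check and our own write."""
    ops = iter(concurrent_ops)
    for _ in range(max_attempts):
        exists = key in table          # snapshot: is the row there?
        op = next(ops, None)           # a concurrent txn runs here
        if op:
            op(table)
        if exists:
            if key not in table:       # concurrently DELETEd: retry as INSERT
                continue
            table[key] = value
            return "updated"
        else:
            if key in table:           # concurrently INSERTed: retry as UPDATE
                continue
            table[key] = value
            return "inserted"
    raise RuntimeError("upsert kept losing the race")

t = {1: "old"}
# a concurrent DELETE flips our UPDATE into an INSERT on retry
print(upsert(t, 1, "new", concurrent_ops=[lambda tab: tab.pop(1)]))  # -> inserted
```

Each retry re-decides INSERT vs. UPDATE from scratch, which is exactly the recursion being joked about.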

Not to get too far into the serializable issues, but the server
won't do any such recursion with the serializable patch. Each
serializable transaction would have its own snapshot where the row
was there or it wasn't, and would act accordingly. If they took
conflicting actions on the same row, one of them might be rolled
back with a serialization failure. The client is likely to want to
retry the operation based on the SQLSTATE indicating serialization
failure, which (as the patch stands now) could result in some
head-banging if the client doesn't introduce some delay first. I
have an optimization in mind (described on the Wiki page) which
could help with that, but its impact on overall performance is
uncertain, so I don't want to mess with that until we have more
benchmarks in place for realistic loads which might use serializable
isolation.

So... No, it's not directly a problem on the server itself. Yes, a
client can make it a problem by resubmitting failed queries "too
quickly". But we might be able to fix that with additional work.
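The "too quickly" resubmission can be tamed client-side with randomized exponential backoff. A hedged sketch (the `SerializationFailure` class and `do_transaction` callback are illustrative stand-ins; a real driver would surface SQLSTATE 40001):

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for the driver exception carrying SQLSTATE 40001."""

def run_with_retry(do_transaction, max_attempts=5, base_delay=0.01):
    """Re-run a serializable transaction on serialization failure,
    sleeping a randomized, growing delay so that concurrent retries
    don't immediately collide with each other again."""
    for attempt in range(max_attempts):
        try:
            return do_transaction()
        except SerializationFailure:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

The jitter (`random.random()`) matters as much as the growth: without it, two transactions that failed together tend to retry together and fail together again.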

-Kevin

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Josh Berkus on

> At 2010 Dev Mtg, we put me down to work on making merge work
> concurrently. That was garbled slightly and had me down as working on
> predicate locking, which is the general solution to the problem.

Well, we *still* want predicate locking regardless of what MERGE
supports. It's useful in about 9 different ways.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: "Kevin Grittner" on
I wrote:

> So... No, it's not directly a problem on the server itself.

I just had a thought -- the MERGE code isn't doing anything fancy
with snapshots, is it? I haven't been tracking that discussion too
closely or read the patch. My previous comments assume that the
*snapshot* is stable for the duration of a MERGE command, at least
if the transaction isolation level is serializable.

-Kevin


From: "Kevin Grittner" on
Josh Berkus <josh(a)agliodbs.com> wrote:

> Well, we *still* want predicate locking regardless of what MERGE
> supports. It's useful in about 9 different ways.

I don't know whether this is the right time to discuss those 9
different uses, but just so everyone knows, the SIRead locks needed
for the SSI implementation in the current serializable patch have
some characteristics which may be exactly what you want (if you want
cache invalidation or some such) or may render them totally useless
for some purposes.

(1) They don't block anything. Ever. Conflicts with writes are
detected, and right now that is used to mark rw-conflicts between
serializable transactions. I assume we may want to add listeners
who can be signaled on such conflicts, too; but that isn't there
now.

(2) They are only acquired by serializable transactions.

(3) They can survive the transaction which acquired them, and even
the termination of the process which ran the transaction. Right now
they go away when the last serializable transaction which overlapped
the acquiring serializable transaction completes. If we add
listeners, I assume we'd want to keep them as long as a listener was
registered, probably with some timeout feature.
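As an illustrative toy model only (plain Python; not the patch's actual data structures), the three properties can be sketched as:

```python
class SIReadLockTable:
    """Toy model of SIRead locks: acquiring never blocks, a
    conflicting write merely records a rw-conflict, and locks can
    outlive the transaction (and backend) that acquired them."""
    def __init__(self):
        self.locks = {}        # lock target -> set of reader xids
        self.conflicts = []    # recorded (reader_xid, writer_xid, target)

    def acquire(self, xid, target):
        # (2) only serializable transactions would ever call this
        self.locks.setdefault(target, set()).add(xid)

    def write(self, writer_xid, target):
        # (1) never blocks: just record rw-conflicts and return
        for reader_xid in self.locks.get(target, ()):
            if reader_xid != writer_xid:
                self.conflicts.append((reader_xid, writer_xid, target))

    def cleanup(self, xid, overlapping_done):
        # (3) locks survive commit; drop them only once every
        # overlapping serializable transaction has completed
        if overlapping_done:
            for readers in self.locks.values():
                readers.discard(xid)
```

Note how `write` returns immediately whether or not a conflict was found; that non-blocking behavior is what makes these locks unusual.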

Just so everyone knows what is and isn't there right now.

-Kevin


From: Josh Berkus on
On 8/5/10 12:33 PM, Kevin Grittner wrote:
> I don't know whether this is the right time to discuss those 9
> different uses, but just so everyone knows, the SIRead locks needed
> for the SSI implementation in the current serializable patch have
> some characteristics which may be exactly what you want (if you want
> cache invalidation or some such) or may render them totally useless
> from some purposes.

Yeah, I haven't wrapped my head around your stuff enough yet. I would
say that having such locks available only for serializable transactions
limits some of the uses I'm thinking of.

Anyway, here are some of the uses I'm thinking of:

(1) Pre-insert lock: you know that you're going to insert a record with
PK="X" later in a long-running SP, so you want to lock out other inserts
of PK="X" at the beginning of the procedure.

(2) FK Locking: you plan to modify or delete a parent FK record in this
transaction, so you want to prevent any updates or inserts on its
related child records. (In my experience, FK-related sharelocks are the
#1 cause of deadlocking).

(3) No-duplicate queueing: you want to create a queue table which
doesn't accept duplicate events, but you don't want it to be a source of
deadlocks. This is a variant of (1), but a common case.

(4) Blackouts: records of type "x" aren't supposed to be created during
period "y to y1" or while procedure "z" is running. Predicate locking
can be used to prevent this more easily than adding and removing a trigger.

(5) Debugging: (variant of 4) records of type "x" keep getting inserted
in the table, and you don't know where they're coming from. You can
predicate lock to force an error and debug it.

.... that's off the top of my head.
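Case (3) can be approximated at the application level today; here's a toy in-process sketch (plain Python, with per-key mutexes standing in for predicate locks; all names are illustrative):

```python
import threading
from collections import defaultdict

class KeyLocks:
    """Toy per-key locks modeling 'lock PK="X" before inserting it'
    (cases 1 and 3 above); illustrative only, not server internals."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def lock(self, key):
        with self._guard:           # protect the dict itself
            lk = self._locks[key]
        lk.acquire()                # then take the per-key lock
        return lk

queue, seen = [], set()
locks = KeyLocks()

def enqueue_once(event_id, payload):
    """Duplicate-free enqueue: take the key lock first, so two
    producers of the same event_id can't both insert, and lock
    order per key is fixed, avoiding deadlocks."""
    lk = locks.lock(event_id)
    try:
        if event_id in seen:
            return False            # duplicate: silently skipped
        seen.add(event_id)
        queue.append((event_id, payload))
        return True
    finally:
        lk.release()
```

A real predicate lock would do this inside the database, across sessions, without the application having to agree on a locking convention.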

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
