From: Tom Lane on 15 Feb 2010 12:49

Joachim Wieland <joe(a)mcknight.de> writes:
> One question regarding #2: Is a client application able to tell
> whether or not it has received all notifications from one batch? i.e.
> does PQnotifies() return NULL only when the backend has sent over the
> complete batch of notifications or could it also return NULL while a
> batch is still being transmitted but the client-side buffer just
> happens to be empty?

That's true, it's difficult for the client to be sure whether it's gotten all the available notifications. It could wait a little bit to see if more arrive, but there's no sure upper bound on how long is enough. If you really need it, though, you could send a query (perhaps just a dummy empty-string query). In the old implementation, the query response would mark a point of guaranteed consistency in the notification responses: you would have gotten all or none of the messages from any particular sending transaction, and furthermore there could not be any missing messages from transactions that committed before one that you saw a message from.

The latter property is probably the bigger issue really, and I'm afraid that even with contiguous queuing we'd not be able to guarantee it, so maybe we have a problem even with my proposed #2 fix. Maybe we should go back to the existing scheme whereby a writer takes a lock it holds through commit, so that entries in the queue are guaranteed to be in commit order. It wouldn't lock out readers, just other writers.

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
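The lock-through-commit scheme Tom mentions can be sketched with a toy model: if a writer acquires the queue lock before appending its first entry and releases it only at commit, a whole transaction's batch lands contiguously and batches appear in commit order. This is only an illustrative simulation (the class and method names are invented, not PostgreSQL's actual AsyncQueueLock code path):

```python
import threading

class NotifyQueue:
    """Toy model of the notify queue under the lock-through-commit
    scheme: the writer holds the lock across its whole batch *and* the
    commit step, so batches never interleave. Hypothetical names, not
    the real PostgreSQL implementation."""
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []

    def commit_with_notifies(self, xid, payloads):
        # Lock held from the first enqueue through commit: entries of
        # one transaction are contiguous and ordered by commit.
        with self._lock:
            for p in payloads:
                self._entries.append((xid, p))
            # ... the commit itself would happen here, still locked ...

    def snapshot(self):
        with self._lock:
            return list(self._entries)

q = NotifyQueue()
t1 = threading.Thread(target=q.commit_with_notifies, args=(1, ["a", "b"]))
t2 = threading.Thread(target=q.commit_with_notifies, args=(2, ["c", "d"]))
t1.start(); t2.start(); t1.join(); t2.join()

xids = [xid for xid, _ in q.snapshot()]
# Each batch is contiguous: an interleaving like 1,2,1,2 is impossible.
assert xids in ([1, 1, 2, 2], [2, 2, 1, 1])
```

Only other writers queue up behind the lock here; a reader taking the same lock briefly in `snapshot()` stands in for the "doesn't lock out readers" property, since it never blocks for longer than one batch.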
From: Joachim Wieland on 15 Feb 2010 06:33

On Mon, Feb 15, 2010 at 3:31 AM, Tom Lane <tgl(a)sss.pgh.pa.us> wrote:
> I'm not sure how probable it is that applications might be coded in a
> way that relies on the properties lost according to point #2 or #3.

Your observations are all correct as far as I can tell. One question regarding #2: Is a client application able to tell whether or not it has received all notifications from one batch? i.e. does PQnotifies() return NULL only when the backend has sent over the complete batch of notifications or could it also return NULL while a batch is still being transmitted but the client-side buffer just happens to be empty?

> We could fix #2 by not releasing AsyncQueueLock between pages when
> queuing messages. This has no obvious downsides as far as I can see;
> if anything it ought to save some cycles and contention.

Currently transactions with a small number of notifications can deliver their notifications and then proceed with their commit while transactions with many notifications need to stay there longer, so the current behavior is fair in this respect. Changing the locking strategy makes the small volume transactions wait for the bigger ones. Also, currently readers can already start reading while writers are still writing (until they hit the first uncommitted transaction of their database).

> I think preserving the
> property that self-notifies are delivered immediately upon commit might
> be more important than that.

Fine with me, sounds reasonable :-)

Joachim
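The reader behavior Joachim describes — scan the queue until hitting the first entry from a still-uncommitted transaction, then stop — can be modeled in a few lines. Again a hedged sketch with invented names; the real code tracks per-database visibility rather than a simple committed-xid set:

```python
# Toy model of a reader scanning the notify queue: it delivers entries
# in queue order and stops at the first entry whose transaction has not
# committed yet, even if later entries already belong to committed
# transactions (hypothetical structure, not the real implementation).

def read_notifications(queue, committed_xids):
    """queue: list of (xid, payload) in queue order; returns payloads
    that can be delivered right now."""
    delivered = []
    for xid, payload in queue:
        if xid not in committed_xids:
            break  # must stop: this transaction hasn't committed yet
        delivered.append(payload)
    return delivered

queue = [(10, "n1"), (10, "n2"), (11, "n3"), (12, "n4")]
# xid 11 has not committed, so the reader stops before "n3" -- and
# cannot see "n4" either, although xid 12 has already committed.
assert read_notifications(queue, {10, 12}) == ["n1", "n2"]
```

This also illustrates Tom's ordering worry: if xid 12 committed *before* xid 11 but queued its entries after, a consistency point taken now would be missing xid 12's messages.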
From: Tom Lane on 16 Feb 2010 00:20

I wrote:
> ...
> 3. It is possible for a backend's own self-notifies to not be delivered
> immediately after commit, if they are queued behind some other
> uncommitted transaction's messages. That wasn't possible before either.
> ... We could fix #3 by re-instituting the special code path that
> previously existed for self-notifies, ie send them to the client
> directly from AtCommit_Notify and ignore self-notifies coming back from
> the queue. This would mean that a backend might see its own
> self-notifies in a different order relative to other backends' messages
> than other backends do --- but that was the case in the old coding as
> well. I think preserving the property that self-notifies are delivered
> immediately upon commit might be more important than that.

I modified the patch to do that, but after awhile realized that there are more worms in this can than I'd thought. What I had done was to add the NotifyMyFrontEnd() calls to the post-commit cleanup function for async.c. However, that is a horribly bad place to put it because of the non-negligible probability of a failure. An encoding conversion failure, for example, becomes a "PANIC: cannot abort transaction NNN, it was already committed".

The reason we have not seen any such behavior in the field is that in the historical coding, self-notifies are actually sent *pre commit*. So if they do happen to fail you get a transaction rollback and no backend crash. Of course, if some notifies went out before we got to the one that failed, the app might have taken action based on a notify for some event that now didn't happen; so that's not exactly ideal either.

So right now I'm not sure what to do. We could adopt the historical policy of sending self-notifies pre-commit, but that doesn't seem tremendously appetizing from the standpoint of transactional integrity. Or we could do it the way Joachim's submitted patch does, but I'm quite sure somebody will complain about the delay involved. Another possibility is to force a ProcessIncomingNotifies scan to occur before we reach ReadyForQuery if we sent any notifies in the just-finished transaction --- but that won't help if there are uncommitted messages in front of ours. So it would only really improve matters if we forced queuing order to match commit order, as I was speculating about earlier.

Thoughts?

			regards, tom lane
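The timing problem Tom describes boils down to where the delivery step sits relative to the commit point. A small, deliberately abstract sketch (the function names and return strings are invented for illustration) of why a failure is recoverable pre-commit but not post-commit:

```python
# Toy illustration: if self-notify delivery runs after the commit point,
# a delivery failure (e.g. an encoding conversion error) can no longer
# abort the transaction; run before commit, the same failure simply
# rolls the transaction back.

def run_transaction(send_self_notifies, deliver):
    committed = False
    try:
        if send_self_notifies == "pre-commit":
            deliver()                 # failure here aborts the txn
        committed = True              # the commit point
        if send_self_notifies == "post-commit":
            deliver()                 # failure here is unrecoverable
    except RuntimeError:
        if committed:
            return "stuck: committed but delivery failed"
        return "rolled back cleanly"
    return "ok"

def failing_delivery():
    raise RuntimeError("encoding conversion failure")

assert run_transaction("pre-commit", failing_delivery) == "rolled back cleanly"
assert run_transaction("post-commit", failing_delivery) == \
    "stuck: committed but delivery failed"
```

The "stuck" branch is the toy analogue of the "PANIC: cannot abort transaction NNN, it was already committed" case; the pre-commit branch shows the historical behavior, including its own flaw that earlier notifies may already have gone out for a transaction that then rolls back.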
From: Joachim Wieland on 16 Feb 2010 03:28

On Tue, Feb 16, 2010 at 6:20 AM, Tom Lane <tgl(a)sss.pgh.pa.us> wrote:
> Another possibility is to force a ProcessIncomingNotifies scan to occur
> before we reach ReadyForQuery if we sent any notifies in the
> just-finished transaction --- but that won't help if there are
> uncommitted messages in front of ours.

What about dealing with self-notifies in memory? i.e. copy them into a subcontext of TopMemoryContext in precommit and commit as usual. Then as a first step in ProcessIncomingNotifies() deliver whatever is in memory and then delete the context. While reading the queue, ignore all self-notifies there. If we abort for some reason, delete the context in AtAbort_Notify().

Would that work?

Joachim
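Joachim's proposal can be sketched as a small state machine per backend: keep a backend-local copy of the transaction's own notifies, deliver that copy first, skip the matching self-entries while scanning the shared queue, and drop the copy on abort. This is one possible reading of the proposal with invented names (the real mechanism would use a TopMemoryContext subcontext, not a Python list):

```python
# Sketch of the in-memory self-notify idea: local copy delivered first,
# self-entries in the shared queue ignored, copy discarded on abort.
# All names here are hypothetical stand-ins for the backend internals.

class Backend:
    def __init__(self, my_xid):
        self.my_xid = my_xid
        self.self_notifies = []   # stand-in for the TopMemoryContext copy

    def at_commit(self, payloads, shared_queue):
        # Copy self-notifies locally *and* queue them for other backends.
        self.self_notifies = list(payloads)
        shared_queue.extend((self.my_xid, p) for p in payloads)

    def at_abort(self):
        self.self_notifies = []   # AtAbort_Notify: drop the local copy

    def process_incoming(self, shared_queue):
        delivered = list(self.self_notifies)  # self-notifies go first
        self.self_notifies = []
        for xid, payload in shared_queue:
            if xid == self.my_xid:
                continue          # ignore our own entries in the queue
            delivered.append(payload)
        return delivered

queue = [(7, "other")]
b = Backend(my_xid=42)
b.at_commit(["mine1", "mine2"], queue)
assert b.process_incoming(queue) == ["mine1", "mine2", "other"]
```

The sketch makes the trade-off visible: self-notifies arrive promptly and independently of uncommitted entries ahead of them in the queue, at the cost of the backend seeing its own notifies in a different order relative to other backends' messages than other backends do.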
From: "Kevin Grittner" on 16 Feb 2010 07:31

Tom Lane wrote:
> We could adopt the historical policy of sending self-notifies
> pre-commit, but that doesn't seem tremendously appetizing from the
> standpoint of transactional integrity.

But one traditional aspect of transactional integrity is that a transaction always sees *its own* uncommitted work. Wouldn't the historical policy of PostgreSQL self-notifies be consistent with that?

-Kevin