From: Tom Lane on
Stefan Kaltenbrunner <stefan(a)kaltenbrunner.cc> writes:
> Greg Stark wrote:
>> For what it's worth Oracle has an option to have your standby
>> intentionally hold back n minutes behind and I've seen that set to 5
>> minutes.

> yeah a lot of people are doing that intentionally...

It's the old DBA screwup safety valve ... drop the main accounts table,
you have five minutes to stop replication before it's dropped on the
standby. Speaking of which, does the current HS+SR code have a
provision to force the standby to stop tracking WAL and come up live,
even when there's more WAL available? Because that's what you'd need
in order for such a thing to be helpful in that scenario.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: "Kevin Grittner" on
Josh Berkus <josh(a)agliodbs.com> wrote:

> It's undeniable that auto-retry would be better from a user's
> perspective than a user-visible cancel. So if it's *reasonable*
> to implement, I think we should be working on it. I'm also very
> puzzled as to why nobody else wants to even discuss it; it's like
> some wierd blackout.

Well, at least for serializable transactions past the first
statement, you'd need to have the complete *logic* for the
transaction in order to do a retry. Not that this is a bad idea --
our application framework does this automatically -- but unless you
only support this for a transaction which is wrapped up as a
function, I don't see how the database itself could handle it. It
might be *possible* to do it outside of a single-function
transaction in a read committed transaction, but you'd have to be
careful about locks. I remember suggesting automatic query retry
(rather than continuing in a mixed-snapshot mode) for update
conflicts in read committed mode and Tom had objections; you might
want to check the archives for that.

-Kevin

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Greg Stark on
josh, nobody is talking about it because it doesn't make sense. you could
only retry if it was the first query in the transaction and only if you
could prove there were no side-effects outside the database and then you
would have no reason to think the retry would be any more likely to work.

greg

On 1 Mar 2010 22:32, "Josh Berkus" <josh(a)agliodbs.com> wrote:

On 2/28/10 7:12 PM, Robert Haas wrote:
>> However, I'd still like to hear from someone with the requ...
"dead end" as in "too hard to implement"? Or for some other reason?

It's undeniable that auto-retry would be better from a user's
perspective than a user-visible cancel. So if it's *reasonable* to
implement, I think we should be working on it. I'm also very puzzled as
to why nobody else wants to even discuss it; it's like some wierd blackout.

--Josh Berkus


--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subs...
From: Tom Lane on
Greg Stark <stark(a)mit.edu> writes:
> josh, nobody is talking about it because it doesn't make sense. you could
> only retry if it was the first query in the transaction and only if you
> could prove there were no side-effects outside the database and then you
> would have no reason to think the retry would be any more likely to work.

But it's hot standby, so there are no data-modifying transactions.
Volatile functions could be a problem, though. A bigger problem is
we might have already shipped partial query results to the client.

I agree it ain't easy, but it might not be completely out of the
question. Definitely won't be happening for 9.0 though.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Robert Haas on
On Mon, Mar 1, 2010 at 5:32 PM, Josh Berkus <josh(a)agliodbs.com> wrote:
> On 2/28/10 7:12 PM, Robert Haas wrote:
>>> However, I'd still like to hear from someone with the requisite
>>> > technical knowledge whether capturing and retrying the current query in
>>> > a query cancel is even possible.
>>
>> I'm not sure who you want to hear from here, but I think that's a dead end.
>
> "dead end" as in "too hard to implement"?  Or for some other reason?

I think it's probably too hard to implement for the extremely limited
set of circumstances in which it can work. See the other responses
for some of the problems. There are others, too. Suppose that the
plan for some particular query is to read a table with a hundred
million records, sort it, and then do whatever with the results.
After reading the first 99 million records, the transaction is
cancelled and we have to start over. Maybe someone will say, fine, no
problem - but it's certainly going to be user-visible. Especially if
we retry more than once.

I think we should focus our efforts initially on reducing the
frequency of spurious cancels. What we're essentially trying to do
here is refute the proposition "the WAL record I just replayed might
change the result of this query". It's possibly equivalent to the
halting problem (and certainly impossibly hard) to refute this
proposition in every case where it is in fact false, but it sounds
like what we have in place right now doesn't come close to doing as
well as can be done.

I just read through the current documentation and it doesn't really
seem to explain very much about how HS decides which queries to kill.
Can someone try to flesh that out a bit? It also uses the term
"buffer cleanup lock", which doesn't seem to be used anywhere else in
the documentation (though it does appear in the source tree, including
README.HOT).

....Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers