From: Greg Stark on 8 Jan 2010 20:14

On Fri, Jan 8, 2010 at 7:36 PM, Joachim Wieland <joe(a)mcknight.de> wrote:
> * If all four pg_synchronize_snapshot_taken() calls return true and the
>

If we must have a timeout I think you should throw an error if the timeout expires.

--
greg
From: Markus Wanner on 9 Jan 2010 14:37

Hi

Joachim Wieland wrote:
> Since nobody objected to the idea in general, I have implemented it.

Great! I hope to get some spare cycles within the next few days to review it.

Regards

Markus Wanner
From: Marcin Mańk on 9 Jan 2010 15:44

On 2010-01-09 at 20:37, Markus Wanner <markus(a)bluegap.ch> wrote:
> Hi
>
> Joachim Wieland wrote:
>> Since nobody objected to the idea in general, I have implemented it.

How cool would it be if we could synchronize snapshots between the master and the (SR) standby? Connection poolers could use that to send read-only queries to the standby and then switch to the master when the first DML/DDL statement in a transaction comes up. If it is hard to tell from the statement whether it writes anything, the pooler could catch the error and retry on the master.

Regards
Marcin Mańk
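A rough sketch of the routing idea Marcin describes, in Python. The statement classification and the error-based fallback below are illustrative assumptions only, not part of any existing pooler or of the patch; keeping the reads consistent across the two servers is exactly what a master/standby snapshot synchronization would have to provide.

    # Illustrative pooler routing only: the write-detection regex and the
    # error-based fallback are assumptions, not part of any existing pooler
    # or of the snapshot-synchronization patch.
    import re
    import psycopg2

    WRITE_RE = re.compile(
        r"^\s*(INSERT|UPDATE|DELETE|CREATE|ALTER|DROP|TRUNCATE|COPY|GRANT|REVOKE)\b",
        re.IGNORECASE,
    )

    def execute_routed(statement, master, standby):
        """Send apparent reads to the standby; fall back to the master on error."""
        if WRITE_RE.match(statement):
            cur = master.cursor()      # obviously a write: straight to the master
            cur.execute(statement)
            return cur
        cur = standby.cursor()
        try:
            cur.execute(statement)     # optimistic: treat it as read-only
            return cur
        except psycopg2.Error:
            # It wrote after all (e.g. a function with side effects) and the
            # read-only standby rejected it, so retry on the master, as
            # suggested above. Error inspection is omitted in this sketch.
            standby.rollback()
            cur = master.cursor()
            cur.execute(statement)
            return cur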
From: Simon Riggs on 10 Jan 2010 06:28

On Fri, 2010-01-08 at 20:36 +0100, Joachim Wieland wrote:
> The attached patch implements the idea of Heikki / Simon published in
>
> http://archives.postgresql.org/pgsql-hackers/2009-11/msg00271.php
>
> Since nobody objected to the idea in general, I have implemented it.
>
> As this is not currently used anywhere it doesn't give immediate benefit, it
> is however a prerequisite for a parallel version of pg_dump that quite some
> people (including myself) seem to be interested in.

I'm interested in this, but realistically won't have time to review this personally in this release. Sorry about that.

--
Simon Riggs
www.2ndQuadrant.com
From: Markus Wanner on 5 Feb 2010 12:29

Hello Joachim,

a little daughter eats lots of spare cycles - among other things. Sorry it took so long to review.

On Fri, 8 Jan 2010 20:36:44 +0100, Joachim Wieland <joe(a)mcknight.de> wrote:
> The attached patch implements the idea of Heikki / Simon published in
>
> http://archives.postgresql.org/pgsql-hackers/2009-11/msg00271.php

I must admit I didn't read that up front, but thought your patch could be useful for implementing parallel querying. So, let's first concentrate on the intended use case: allowing parallel pg_dump. To me it seems like a pragmatic and quick solution; however, I'm not sure whether requiring superuser privileges is acceptable.

The patch currently compiles (modulo some OID changes in pg_proc.h to prevent duplicates) and the test suite runs through fine. I haven't tested the new functions, though.

Reading the code, I'm missing the part that actually acquires the snapshot for the transaction(s). After setting up multiple transactions with pg_synchronize_snapshot and pg_synchronize_snapshot_taken, they still don't have a snapshot, do they? Also, you should probably ensure the calling transactions don't already have a snapshot (let alone a transaction id).

In a similar vein, and answering your question in a comment: yes, I'd say you want to ensure your transactions are in SERIALIZABLE isolation mode. There's no other isolation level for which that kind of snapshot serialization makes sense, is there?

Using the exposed functions in a more general sense, I think it's important to note that the patch only intends to synchronize snapshots at the start of the transaction, not continuously. Thus, normal transaction isolation applies to concurrent writes, and each of the transactions can commit or roll back independently.

The timeout is nice, but is it really required? Isn't the normal query cancellation infrastructure sufficient?

Hope that helps. Thanks for working on this issue.

Regards

Markus Wanner
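To make the review comments above concrete, here is a minimal sketch (Python/psycopg2) of how one worker session of a parallel pg_dump might join a synchronized snapshot. The argument list and boolean return value of pg_synchronize_snapshot_taken() are not shown anywhere in this thread, so the no-argument form used below is purely an assumption, as is the timeout behaviour Greg alludes to.

    # Rough sketch of one worker session joining a synchronized snapshot.
    # ASSUMPTIONS: the no-argument call form and the boolean return value of
    # pg_synchronize_snapshot_taken() are guesses; the actual patch may differ.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres user=postgres")  # superuser, per the review
    # Per Markus' comment: the transaction should run in SERIALIZABLE isolation
    # and must not have taken a snapshot (or an XID) before synchronizing.
    conn.set_session(isolation_level="SERIALIZABLE", autocommit=False)

    cur = conn.cursor()
    # Assumed call: report that this backend has adopted the coordinator's
    # snapshot; true on success, false if the rendezvous timed out.
    cur.execute("SELECT pg_synchronize_snapshot_taken()")
    (ok,) = cur.fetchone()
    if not ok:
        conn.rollback()
        raise RuntimeError("snapshot synchronization timed out")  # per Greg: error out

    # From here on, reads in this transaction would see the shared snapshot,
    # while the transaction still commits or rolls back independently.
    cur.execute("SELECT count(*) FROM pg_class")
    print(cur.fetchone()[0])
    conn.commit()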