From: Robert Haas on
On Fri, Jul 9, 2010 at 12:03 AM, Mark Kirkwood
<mark.kirkwood(a)catalyst.net.nz> wrote:
> On 09/07/10 15:57, Robert Haas wrote:
>>
>> Hmm. Well, those numbers seem awfully high, for what you're doing,
>> then. An admission control mechanism that's just letting everything
>> in shouldn't knock 5% off performance (let alone 30%).
>
> Yeah it does; on the other hand, both Josh and I were trying to elicit
> the worst-case overhead.

Even so...

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


From: Robert Haas on
On Thu, Jul 8, 2010 at 11:00 PM, Mark Kirkwood
<mark.kirkwood(a)catalyst.net.nz> wrote:
> On 09/07/10 14:26, Robert Haas wrote:
>>
>> On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
>> <mark.kirkwood(a)catalyst.net.nz> wrote:
>>
>>>
>>> Purely out of interest, since the old repo is still there, I had a quick
>>> look at measuring the overhead, using 8.4's pgbench to run two custom
>>> scripts: one consisting of a single 'SELECT 1', the other having 100
>>> 'SELECT
>>> 1' - the latter being probably the worst case scenario. Running 1,2,4,8
>>> clients and 1000-10000 transactions gives an overhead in the 5-8% range
>>> [1] (i.e. transactions/s decrease by this amount with the scheduler
>>> turned on [2]). While a lot better than 30% (!) it is certainly higher
>>> than we'd like.
>>>
>>
>> Isn't the point here to INCREASE throughput?
>>
>>
>
> LOL - yes it is! Josh wanted to know what the overhead was for the queue
> machinery itself, so I'm running a test to show that (i.e. I have a queue
> with the thresholds set higher than the test will ever reach them).
>
> In the situation where (say) 11 concurrent queries of a certain type make
> your system unusable, but 10 are fine, constraining it to a max of 10 will
> tend to improve throughput. By how much is hard to say - for instance, the
> gain from preventing the Linux OOM killer from shutting postgres down
> would be infinite, I guess :-)
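
(For reference, the pgbench setup Mark describes above amounts to roughly
the following - a sketch only, with invented file names and a placeholder
database name, not his actual test scripts:

    # One custom script containing a single trivial statement ...
    echo "SELECT 1;" > select1.sql
    # ... and one containing 100 of them, approximating the worst case
    # for per-statement scheduler overhead.
    for i in $(seq 1 100); do echo "SELECT 1;"; done > select100.sql

    # 8.4 pgbench: -n skips vacuuming, -f runs a custom script,
    # -c sets the client count, -t the transactions per client.
    for c in 1 2 4 8; do
        pgbench -n -f select100.sql -c $c -t 10000 postgres
    done

Comparing the tps figures reported with the scheduler enabled and
disabled gives the overhead percentages quoted above.)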

Hmm. Well, those numbers seem awfully high, for what you're doing,
then. An admission control mechanism that's just letting everything
in shouldn't knock 5% off performance (let alone 30%).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


From: "Kevin Grittner" on
Mark Kirkwood <mark.kirkwood(a)catalyst.net.nz> wrote:

> Purely out of interest, since the old repo is still there, I had a
> quick look at measuring the overhead, using 8.4's pgbench to run
> two custom scripts: one consisting of a single 'SELECT 1', the
> other having 100 'SELECT 1' - the latter being probably the worst
> case scenario. Running 1,2,4,8 clients and 1000-10000 transactions
> gives an overhead in the 5-8% range [1] (i.e. transactions/s
> decrease by this amount with the scheduler turned on [2]). While a
> lot better than 30% (!) it is certainly higher than we'd like.

Hmmm... In my first benchmarks of the serializable patch I was
likewise stressing a RAM-only run to see how much overhead was added
to a very small database transaction, and wound up with about 8%.
By profiling where the time was going with and without the patch,
I narrowed it down to lock contention. Reworking my LW locking
strategy brought it down to 1.8%. I'd bet there's room for similar
improvement in the "active transaction" limit you describe. In fact,
if you could bring the code inside blocks of code already covered by
locks, I would think you could get it down to where it would be hard
to find in the noise.

-Kevin
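
(To illustrate Kevin's point in stand-alone form - this is an editorial
sketch, not code from the patch or from core PostgreSQL; a pthread mutex
stands in for an LWLock and all names here are invented:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define ADMISSION_LIMIT 10

    /* A lock the backend must take at transaction start anyway. */
    static pthread_mutex_t existing_lock = PTHREAD_MUTEX_INITIALIZER;
    /* An extra lock used only by the naive admission check. */
    static pthread_mutex_t admission_lock = PTHREAD_MUTEX_INITIALIZER;
    static int active_xacts = 0;

    /* Naive: a separate lock just for the admission counter, adding an
     * extra acquire/release (and contention point) to every transaction. */
    static bool admit_naive(void)
    {
        pthread_mutex_lock(&admission_lock);
        bool admitted = (active_xacts < ADMISSION_LIMIT);
        if (admitted)
            active_xacts++;
        pthread_mutex_unlock(&admission_lock);
        return admitted;
    }

    /* Kevin's suggestion: fold the same check into a critical section
     * that is entered regardless, so no extra lock traffic is added. */
    static bool start_transaction(void)
    {
        pthread_mutex_lock(&existing_lock);
        /* ... existing transaction-start bookkeeping would go here ... */
        bool admitted = (active_xacts < ADMISSION_LIMIT);
        if (admitted)
            active_xacts++;
        pthread_mutex_unlock(&existing_lock);
        return admitted;
    }

    int main(void)
    {
        printf("naive admit: %d\n", admit_naive());
        printf("piggybacked admit: %d\n", start_transaction());
        return 0;
    }

The two functions do the same bookkeeping; the difference is simply
whether it costs an extra lock round trip per transaction.)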


From: Mark Kirkwood on
On 10/07/10 03:54, Kevin Grittner wrote:
> Mark Kirkwood<mark.kirkwood(a)catalyst.net.nz> wrote:
>
>
>> Purely out of interest, since the old repo is still there, I had a
>> quick look at measuring the overhead, using 8.4's pgbench to run
>> two custom scripts: one consisting of a single 'SELECT 1', the
>> other having 100 'SELECT 1' - the latter being probably the worst
>> case scenario. Running 1,2,4,8 clients and 1000-10000 transactions
>> gives an overhead in the 5-8% range [1] (i.e. transactions/s
>> decrease by this amount with the scheduler turned on [2]). While a
>> lot better than 30% (!) it is certainly higher than we'd like.
>>
>
> Hmmm... In my first benchmarks of the serializable patch I was
> likewise stressing a RAM-only run to see how much overhead was added
> to a very small database transaction, and wound up with about 8%.
> By profiling where the time was going with and without the patch,
> I narrowed it down to lock contention. Reworking my LW locking
> strategy brought it down to 1.8%. I'd bet there's room for similar
> improvement in the "active transaction" limit you describe. In fact,
> if you could bring the code inside blocks of code already covered by
> locks, I would think you could get it down to where it would be hard
> to find in the noise.
>
>

Yeah, excellent suggestion - I suspect there is room for considerable
optimization along the lines you suggest. At the time the focus was
heavily biased toward a purely DW workload, where the overhead vanished
against large plan and execute times, but this could be revisited.
Having said that, I suspect a re-architecture is needed for a more
wide-ranging solution suitable for Postgres (as opposed to Bizgres or
Greenplum).

Cheers

Mark
