From: Mark Kirkwood on
On 09/07/10 05:10, Josh Berkus wrote:
> Simon, Mark,
>
>> Actually only 1 lock check per query, but certainly extra processing and
>> data structures to maintain the pool information... so, yes certainly
>> much more suitable for DW (AFAIK we never attempted to measure the
>> additional overhead for non DW workload).
>
> I recall testing it when the patch was submitted for 8.2, and the
> overhead was substantial in the worst case ... like 30% for an
> in-memory one-liner workload.
>

Interesting - quite high! However, I recall you tested the initial
committed version; later additions dramatically reduced the overhead
(what is in the Bizgres repo *now* is the latest).

> I've been going over the greenplum docs and it looks like the attempt
> to ration work_mem was dropped. At this point, Greenplum 3.3 only
> rations by # of concurrent queries and total cost. I know that
> work_mem rationing was in the original plans; what made that unworkable?
>

That certainly was my understanding too. I left Greenplum about the time
this was being discussed, and I think the other staff member involved
with the design left soon afterwards as well, which might have been a
factor!

> My argument in general is that in the general case ... where you can't
> count on a majority of long-running queries ... any kind of admission
> control or resource management is a hard problem (if it weren't,
> Oracle would have had it before 11). I think that we'll need to
> tackle it, but I don't expect the first patches we make to be even
> remotely usable. It's definitely not an SOC project.
>
> I should write more about this.
>

+1

Cheers

Mark


--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Mark Kirkwood on
On 09/07/10 12:58, Mark Kirkwood wrote:
> On 09/07/10 05:10, Josh Berkus wrote:
>> Simon, Mark,
>>
>>> Actually only 1 lock check per query, but certainly extra processing
>>> and
>>> data structures to maintain the pool information... so, yes certainly
>>> much more suitable for DW (AFAIK we never attempted to measure the
>>> additional overhead for non DW workload).
>>
>> I recall testing it when the patch was submitted for 8.2, and the
>> overhead was substantial in the worst case ... like 30% for an
>> in-memory one-liner workload.
>>
>
> Interesting - quite high! However, I recall you tested the initial
> committed version; later additions dramatically reduced the overhead
> (what is in the Bizgres repo *now* is the latest).

Purely out of interest, since the old repo is still there, I had a quick
look at measuring the overhead, using 8.4's pgbench to run two custom
scripts: one consisting of a single 'SELECT 1', the other having 100
'SELECT 1' statements - the latter probably being the worst-case
scenario. Running 1, 2, 4, 8 clients and 1000-10000 transactions gives
an overhead in the 5-8% range [1] (i.e. transactions/s decrease by this
amount with the scheduler turned on [2]). While a lot better than 30%
(!), it is certainly higher than we'd like.


Cheers

Mark

[1] I got the same range for pgbench select-only using its usual workload
[2] As compared to Bizgres (8.2.4) and also standard Postgres 8.2.12.


From: Robert Haas on
On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
<mark.kirkwood(a)catalyst.net.nz> wrote:
> Purely out of interest, since the old repo is still there, I had a quick
> look at measuring the overhead, using 8.4's pgbench to run two custom
> scripts: one consisting of a single 'SELECT 1', the other having 100 'SELECT
> 1' - the latter being probably the worst case scenario. Running 1,2,4,8
> clients and 1000-10000 transactions gives an overhead in the 5-8% range [1]
> (i.e transactions/s decrease by this amount with the scheduler turned on
> [2]). While a lot better than 30% (!) it is certainly higher than we'd like.

Isn't the point here to INCREASE throughput?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


From: Mark Kirkwood on
On 09/07/10 14:26, Robert Haas wrote:
> On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
> <mark.kirkwood(a)catalyst.net.nz> wrote:
>
>> Purely out of interest, since the old repo is still there, I had a quick
>> look at measuring the overhead, using 8.4's pgbench to run two custom
>> scripts: one consisting of a single 'SELECT 1', the other having 100 'SELECT
>> 1' - the latter being probably the worst case scenario. Running 1,2,4,8
>> clients and 1000-10000 tramsactions gives an overhead in the 5-8% range [1]
>> (i.e transactions/s decrease by this amount with the scheduler turned on
>> [2]). While a lot better than 30% (!) it is certainly higher than we'd like.
>>
> Isn't the point here to INCREASE throughput?
>
>

LOL - yes it is! Josh wanted to know what the overhead was for the queue
machinery itself, so I'm running a test to show just that (i.e. a queue
with its thresholds set higher than the test will ever load them).

In the situation where (say) 11 concurrent queries of a certain type
make your system become unusable, but 10 are fine, then constraining it
to a max of 10 will tend to improve throughput. By how much is hard to
say; for instance, preventing the Linux OOM killer from shutting
postgres down would be an infinite improvement, I guess :-)
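For illustration only, the admission-control idea above can be sketched as a counting semaphore that caps how many "queries" run at once. This is a toy Python model of the concept, not postgres internals; the threshold of 10 and the simulated workload are made-up numbers.

```python
import threading
import time

MAX_ACTIVE = 10  # assumed threshold: 10 concurrent queries are fine, 11 are not
gate = threading.Semaphore(MAX_ACTIVE)

active = 0  # how many "queries" are running right now
peak = 0    # highest concurrency ever observed
lock = threading.Lock()

def run_query():
    """Simulate one query passing through the admission gate."""
    global active, peak
    with gate:  # admission check: blocks until a slot frees up
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # stand-in for actual query execution
        with lock:
            active -= 1

# Throw 25 concurrent "queries" at the gate; only 10 run at a time.
threads = [threading.Thread(target=run_query) for _ in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"peak concurrency: {peak} (cap was {MAX_ACTIVE})")
```

The excess queries simply wait their turn rather than overloading the system, which is the throughput argument being made: queuing at the gate is cheaper than thrashing (or an OOM kill) once past the tipping point.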

Cheers

Mark


From: Mark Kirkwood on
On 09/07/10 15:57, Robert Haas wrote:
> Hmm. Well, those numbers seem awfully high, for what you're doing,
> then. An admission control mechanism that's just letting everything
> in shouldn't knock 5% off performance (let alone 30%).
>
>

Yeah, it does; on the other hand, both Josh and I were trying to elicit
the worst-case overhead.

Cheers

Mark
