From: "Kevin Grittner" on
Jesper Krogh <jesper(a)krogh.cc> wrote:

> I have not hit any issues with the work_mem being too high, but
> I'm absolutely sure that I could flood the system if they happened
> to be working at the same time.

OK, now that I understand your workload, I agree that a connection
pool at the transaction level won't do you much good. Something
which limited the concurrent *query* count, or an execution
admission controller based on resource usage, could save you from
occasional random incidents of resource over-usage.
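
(For illustration only: a minimal application-side sketch of capping the
concurrent *query* count, assuming psycopg2 and a made-up limit. Nothing
like this exists in core; a real admission controller would live in the
server, but the queueing idea is the same.)

    import threading
    import psycopg2

    MAX_CONCURRENT_HEAVY = 4                 # hypothetical limit
    _slots = threading.BoundedSemaphore(MAX_CONCURRENT_HEAVY)

    def run_heavy_query(dsn, sql, params=None):
        # Block until a slot is free, so at most MAX_CONCURRENT_HEAVY
        # of these (read-only) statements execute at the same time.
        with _slots:
            conn = psycopg2.connect(dsn)
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            finally:
                conn.close()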

-Kevin


From: Mark Kirkwood on
On 29/06/10 04:48, Tom Lane wrote:
> "Ross J. Reedstrom"<reedstrm(a)rice.edu> writes:
>
>> Hmm, I'm suddenly struck by the idea of having a max_cost parameter,
>> that refuses to run (or delays?) queries that have "too high" a cost.
>>
> That's been suggested before, and shot down on the grounds that the
> planner's cost estimates are not trustworthy enough to rely on for
> purposes of outright-failing a query. If you didn't want random
> unexpected failures, you'd have to set the limit so much higher than
> your regular queries cost that it'd be pretty much useless.
>
>

I wrote something along these lines for Greenplum (it is probably still
available in the Bizgres CVS). Yes, cost is not an ideal metric for
bounding workload (but it was perhaps better than nothing at all for the
case it was intended for).
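
(As an illustration of the "max_cost" idea being discussed: a client-side
sketch that asks the planner for its estimate via EXPLAIN and refuses to
run the statement above an arbitrary threshold. The helper and the limit
are invented, and, per Tom's point above, the estimate is too unreliable
to be more than a coarse safety valve.)

    import json
    import psycopg2

    COST_LIMIT = 1000000.0     # arbitrary threshold, in planner cost units

    def cost_gated_execute(conn, sql, params=None):
        with conn.cursor() as cur:
            # Ask the planner for its estimate first.
            cur.execute("EXPLAIN (FORMAT JSON) " + sql, params)
            plan = cur.fetchone()[0]
            if isinstance(plan, str):        # some drivers hand back text
                plan = json.loads(plan)
            total_cost = plan[0]["Plan"]["Total Cost"]
            if total_cost > COST_LIMIT:
                raise RuntimeError("estimated cost %.0f over limit" % total_cost)
            cur.execute(sql, params)
            return cur.fetchall()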

One difficulty with looking at things from the statement-cost point of
view is that all the requisite locks have already been taken by the time
you have a plan - so if you delay execution, those locks are still held
and the likelihood of deadlock increases (unless you release the locks
for waiters and re-acquire them later - but then you may need to restart
the executor from scratch to cope with possible table or schema changes).

> Maybe it'd be all right if it were just used to delay launching the
> query a bit, but I'm not entirely sure I see the point of that.
>

I recall handling this by having a configurable option to let these
queries run if nothing else was running. Clearly, to turn this option on
you would have to be confident that no single query could bring the
system down.
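
(A crude client-side approximation of that "run only if nothing else is
running" policy, assuming a reasonably recent pg_stat_activity with the
state and pid columns. The check is racy - another query can start right
after it - so it sketches the policy, not the Bizgres implementation.)

    import psycopg2

    def run_if_idle(conn, sql):
        with conn.cursor() as cur:
            # Count other backends that are currently executing something.
            cur.execute(
                "SELECT count(*) FROM pg_stat_activity "
                "WHERE state = 'active' AND pid <> pg_backend_pid()")
            (active,) = cur.fetchone()
            if active > 0:
                return None          # caller can retry later
            cur.execute(sql)
            return cur.fetchall()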

Cheers

Mark


From: Simon Riggs on
On Fri, 2010-06-25 at 13:10 -0700, Josh Berkus wrote:

> The problem with centralized resource control

To balance the discussion, we should also talk about the problems caused
by the lack of centralized resource control.

Another well-observed problem is that work_mem is user-settable, so many
backends running together, each with a high work_mem, can cause swapping.
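
(A back-of-the-envelope illustration with made-up numbers: work_mem is a
per-sort/per-hash allowance for each backend, not a global budget, so the
worst case multiplies up quickly.)

    work_mem_mb         = 256   # what each session happens to SET
    concurrent_sessions = 40
    sort_hash_nodes     = 3     # work_mem applies per node, not per query

    worst_case_mb = work_mem_mb * concurrent_sessions * sort_hash_nodes
    print("worst-case demand: %d MB" % worst_case_mb)   # 30720 MB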

The reality is that inefficient resource control leads to about 50%
resource wastage.

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Training and Services



From: Mark Kirkwood on
On 29/06/10 05:36, Josh Berkus wrote:
>
> Having tinkered with it, I'll tell you that (2) is actually a very
> hard problem, so any solution we implement should delay as long as
> possible in implementing (2). In the case of Greenplum, what Mark did
> originally IIRC was to check against the global memory pool for each
> work_mem allocation. This often resulted in 100's of global locking
> checks per query ... like I said, feasible for DW, not for OLTP.

Actually, it was only one lock check per query, but there was certainly
extra processing and data structures to maintain the pool information...
so yes, certainly much more suitable for DW (AFAIK we never attempted to
measure the additional overhead for a non-DW workload).
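
(A toy sketch of that bookkeeping in ordinary threading terms: each query
makes one reservation against a shared pool, under a single lock, and
returns it when done. The pool size and the waiting policy are invented;
this is not the Bizgres code.)

    import threading

    class WorkMemPool:
        def __init__(self, total_mb):
            self.free_mb = total_mb
            self.cond = threading.Condition()

        def acquire(self, mb):
            with self.cond:              # the one lock check per query
                while self.free_mb < mb:
                    self.cond.wait()     # or fail / fall back to a small work_mem
                self.free_mb -= mb

        def release(self, mb):
            with self.cond:
                self.free_mb += mb
                self.cond.notify_all()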

Cheers

Mark


From: Josh Berkus on
Simon, Mark,

> Actually, it was only one lock check per query, but there was certainly
> extra processing and data structures to maintain the pool information...
> so yes, certainly much more suitable for DW (AFAIK we never attempted to
> measure the additional overhead for a non-DW workload).

I recall testing it when the patch was submitted for 8.2, and the
overhead was substantial in the worst case ... around 30% for an
in-memory one-liner workload.

I've been going over the Greenplum docs, and it looks like the attempt
to ration work_mem was dropped. At this point, Greenplum 3.3 only rations
by the number of concurrent queries and total cost. I know that work_mem
rationing was in the original plans; what made it unworkable?

My argument is that in the general case ... where you can't count on a
majority of long-running queries ... any kind of admission control or
resource management is a hard problem (if it weren't, Oracle would have
had it before 11). I think we'll need to tackle it, but I don't expect
the first patches we make to be even remotely usable. It's definitely
not a SoC project.

I should write more about this.

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
