From: "Kevin Grittner" on
Markus Wanner <markus(a)bluegap.ch> wrote:

> (I don't dare to add these patches to the commit fest, as this
> refactoring doesn't have any immediate benefit for Postgres
> itself, at the moment.)

You could submit them as Work In Progress patches....

-Kevin

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Markus Wanner on
Hi,

On 07/13/2010 08:45 PM, Kevin Grittner wrote:
> You could submit them as Work In Progress patches....

Okay, I added them. I guess they'll get more attention that way.

Regards

Markus

From: Dimitri Fontaine on
Hi,

We've been talking about this topic on -performance:

Markus Wanner <markus(a)bluegap.ch> writes:
> I've combined these two components into a single, general purpose background
> worker infrastructure component, which is now capable of serving autovacuum as
> well as Postgres-R. And it might be of use for other purposes as well, most
> prominently parallel query processing. Basically anything that needs a
> backend connected to a database to do any kind of background processing,
> possibly parallelized.

Magnus Hagander <magnus(a)hagander.net> writes:
> On Tue, Jul 13, 2010 at 16:42, Dimitri Fontaine <dfontaine(a)hi-media.com> wrote:
>> So a supervisor daemon with a supervisor API that would have to support
>> autovacuum as a use case, then things like pgagent, PGQ and pgbouncer,
>> would be very welcome.
>>
>> What about starting a new thread about that? Or you already know you
>> won't want to push the extensibility of PostgreSQL there?
>
> +1 on this idea in general, if we can think up a good API - this seems
> very useful to me, and you have some good examples there of cases
> where it'd definitely be a help.

So, do you think we could use your work as a base for allowing custom
daemon code? I guess we need to think about how to separate external
code and internal code, so a second layer could be necessary here.

As far as the API goes, I have several ideas but nothing that I have
already implemented, so I'd prefer to follow Markus there :)

Regards,
--
dim

From: Markus Wanner on
Hi,

On 07/15/2010 03:45 PM, Dimitri Fontaine wrote:
> We've been talking about this topic on -performance:

Thanks for pointing out this discussion; I'm not following -performance
too closely.

> So, do you think we could use your work as a base for allowing custom
> daemon code?

Daemon code? That sounds like it could be an addition to the
coordinator, which I'm somewhat hesitant to extend, as it's a pretty
critical process (especially for Postgres-R).

With step 3, which adds support for sockets, you can use the
coordinator to listen on pretty much any kind of socket you want. That
might be helpful in some cases (just as it is required for connecting to
the GCS).

However, note that the coordinator is designed to be just a message
passing or routing process, which should not do any kind of time
consuming processing. It must *coordinate* things (well, jobs) and react
promptly. Nothing else.

On the other hand, the background workers have a connection to exactly
one database. They are supposed to do work on that database.

> I guess we need to think about how to separate external
> code and internal code, so a second layer could be necessary here.

The background workers can easily load external libraries - just as a
normal backend can with LOAD. That would also provide better
encapsulation (i.e. an error would only tear down that backend, not the
coordinator). You'd certainly have to communicate between the
coordinator and the background worker. I'm not sure how well that fits
your use case.

The thread on -performance is talking quite a bit about connection
pooling. The only way I can imagine connection pooling being
implemented on top of bgworkers would be to let the coordinator
listen on an additional port and pass on all requests to the bgworkers
as jobs (using imessages). And of course send back the responses to the
client. I'm not sure how that overhead compares to using pgpool or
pgbouncer. Those are also separate processes through which all of your
data must flow. They use plain system sockets; imessages use signals and
shared memory.

I don't know enough about the pgagent or PgQ use cases to comment,
sorry. Hope that's helpful, anyway.

Regards

Markus

From: Jaime Casanova on
On Thu, Jul 15, 2010 at 1:28 PM, Markus Wanner <markus(a)bluegap.ch> wrote:
>
> However, note that the coordinator is designed to be just a message
> passing or routing process, which should not do any kind of time
> consuming processing. It must *coordinate* things (well, jobs) and react
> promptly. Nothing else.
>

So, merging this with autovacuum would drop our hopes of having a
time-based autovacuum? Not that I'm working on that, nor was I thinking
of working on that... just asking to know what the implications are,
and what the future improvements could be if we go this route.

--
Jaime Casanova         www.2ndQuadrant.com
Soporte y capacitación de PostgreSQL
