From: "Kevin Grittner" on
Tom Lane <tgl(a)sss.pgh.pa.us> wrote:

> It might be better to try a test case with lighter-weight objects,
> say 5 million simple functions.

So the current database is expendable? I'd just as soon delete it
before creating the other one, if you're fairly confident the other
one will do it.

-Kevin
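
For what it's worth, the test setup Tom suggests, 5 million simple functions, could be generated with a script along these lines (the function name prefix and the trivial SQL body are illustrative assumptions; writing the DDL to a file and loading it with psql avoids millions of round trips):

```python
# Sketch: emit DDL for N trivial SQL functions, to be loaded with psql.
# The name prefix "f" and the function body are illustrative assumptions.
def function_ddl(n):
    for i in range(n):
        yield (
            f"CREATE FUNCTION f{i}() RETURNS int\n"
            f"  LANGUAGE sql AS 'SELECT {i}';\n"
        )

# Write a small sample; for the real test, use n = 5_000_000.
with open("functions.sql", "w") as f:
    f.writelines(function_ddl(1000))
```

The resulting file can then be loaded in one shot with something like `psql -1 -f functions.sql testdb`.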

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Tom Lane on
"Kevin Grittner" <Kevin.Grittner(a)wicourts.gov> writes:
> Tom Lane <tgl(a)sss.pgh.pa.us> wrote:
>> It might be better to try a test case with lighter-weight objects,
>> say 5 million simple functions.

> So the current database is expendable?

Yeah, I think it was a bad experimental design anyway...

regards, tom lane


From: "Kevin Grittner" on
Tom Lane <tgl(a)sss.pgh.pa.us> wrote:

> It might be better to try a test case with lighter-weight objects,
> say 5 million simple functions.

A dump of that quickly settled into running a series of these:

SELECT proretset, prosrc, probin,
  pg_catalog.pg_get_function_arguments(oid) AS funcargs,
  pg_catalog.pg_get_function_identity_arguments(oid) AS funciargs,
  pg_catalog.pg_get_function_result(oid) AS funcresult,
  proiswindow, provolatile, proisstrict, prosecdef, proconfig,
  procost, prorows,
  (SELECT lanname FROM pg_catalog.pg_language WHERE oid = prolang) AS lanname
FROM pg_catalog.pg_proc
WHERE oid = '1404528'::pg_catalog.oid

(with different oid values, of course).

Is this before or after the point you were worried about? Anything
in particular I should be alert for?

-Kevin


From: "Kevin Grittner" on
Tom Lane <tgl(a)sss.pgh.pa.us> wrote:

> It might be better to try a test case with lighter-weight objects,
> say 5 million simple functions.

Said dump ran in about 45 minutes with no obvious stalls or
problems. The 2.2 GB database dumped to a 1.1 GB text file, which
was a little bit of a surprise.

-Kevin


From: Tom Lane on
"Kevin Grittner" <Kevin.Grittner(a)wicourts.gov> writes:
> Tom Lane <tgl(a)sss.pgh.pa.us> wrote:
>> It might be better to try a test case with lighter-weight objects,
>> say 5 million simple functions.

> Said dump ran in about 45 minutes with no obvious stalls or
> problems. The 2.2 GB database dumped to a 1.1 GB text file, which
> was a little bit of a surprise.

Did you happen to notice anything about pg_dump's memory consumption?
For an all-DDL case like this, I'd sort of expect the memory usage to
be comparable to the output file size.
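
One generic way to get that number after the fact, if you didn't watch top while the dump ran, is to rerun it under a wrapper that reports the child's peak RSS. A sketch (the measured command here is a stand-in; substitute the real pg_dump invocation):

```python
import resource
import subprocess

def peak_child_rss_kb(cmd):
    """Run cmd to completion and return the max RSS of terminated
    child processes, as reported by getrusage (KiB on Linux)."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# Stand-in workload that allocates ~50 MB; for the real measurement,
# pass e.g. ["pg_dump", "-f", "out.sql", "testdb"] instead.
peak = peak_child_rss_kb(["python3", "-c", "x = 'a' * (50 * 1024 * 1024)"])
print(peak)
```

On Linux, `/usr/bin/time -v pg_dump ...` reports the same figure as "Maximum resident set size".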

Anyway this seems to suggest that we don't have any huge problem with
large numbers of archive TOC objects, so the next step probably is to
look at how big a code change it would be to switch over to
TOC-per-blob.

regards, tom lane
