From: Heikki Linnakangas on
Jeff Davis wrote:
> On Mon, 2007-03-05 at 15:30 -0500, Tom Lane wrote:
>> Jeff Davis <pgsql(a)j-davis.com> writes:
>>> Absolutely. I've got a parameter in my patch "sync_scan_offset" that
>>> starts a seq scan N pages before the position of the last seq scan
>>> running on that table (or a current seq scan if there's still a scan
>>> going).
>> Strikes me that expressing that parameter as a percentage of
>> shared_buffers might make it less in need of manual tuning ...
>>
>
> The original patch was a percentage of effective_cache_size, because in
> theory it may be helpful to have this parameter larger than shared
> buffers. Synchronized Scanning can take advantage of the OS buffer cache as
> well.
>
> Someone convinced me to change it to be an independent variable.
>
> I don't have a strong opinion, but now I have three different opinions:
> (1) Independent parameter
> (2) Percentage of shared_buffers
> (3) Percentage of effective_cache_size

Another approach I proposed back in December is to not have a variable
like that at all, but to initialize the scan by scanning the buffer cache
for pages belonging to the table being scanned. Scanning all the
BufferDescs is a fairly CPU- and lock-heavy operation, but it might be
acceptable given that we're talking about large, I/O-bound sequential
scans. It would require no DBA tuning and would behave more robustly
under varying conditions. I'm not sure where the scan should continue
once the in-cache pages are exhausted; at the highest in-cache block
number, perhaps.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

http://archives.postgresql.org

From: Mark Kirkwood on
Tom Lane wrote:

> So the
> problem is not so much the clock sweep overhead as that it's paid in a
> very nonuniform fashion: with N buffers you pay O(N) once every N reads
> and O(1) the rest of the time. This is no doubt slowing things down
> enough to delay that one read, instead of leaving it nicely I/O bound
> all the time. Mark, can you detect "hiccups" in the read rate using
> your setup?
>

I think so, here's the vmstat output for 400MB of shared_buffers during
the scan:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free  buff   cache   si   so     bi  bo    in    cs us sy id wa
 1  0    764  51772     0 1990688    0    0 120422   2  1546  1755 16 37 46  1
 1  0    764  53640     0 1988792    0    0 120422   2  1544  1446 14 40 46  1
 1  0    788  54900     0 1987564    0    0 116746  15  1470  3067 15 39 44  2
 1  0    788  52800     0 1989552    0    0 119199  20  1488  2216 14 37 47  1
 1  0    788  52372     0 1990000    0    0 122880   7  1532  1203 15 39 45  1
 1  0    788  54592     0 1987872    0    5 124928   5  1557  1058 17 38 46  0
 2  0    788  54052     0 1987836    0    0 118787   0  1500  2469 16 36 47  1
 1  0    788  52552     0 1989892    0    0 120419   0  1506  2531 15 36 48  1
 1  0    788  53452     0 1989356    0    0 119195   2  1501  1698 15 37 47  1
 1  0    788  52680     0 1989796    0    0 120424   2  1521  1610 16 37 47  1


Cheers

Mark

From: Tom Lane on
Mark Kirkwood <markir(a)paradise.net.nz> writes:
> Tom Lane wrote:
>> Mark, can you detect "hiccups" in the read rate using
>> your setup?

> I think so, here's the vmstat output for 400MB of shared_buffers during
> the scan:

Hm, not really a smoking gun there. But just for grins, would you try
this patch and see if the numbers change?

regards, tom lane

From: Mark Kirkwood on
Tom Lane wrote:

>
> Hm, not really a smoking gun there. But just for grins, would you try
> this patch and see if the numbers change?
>

Applied to 8.2.3 (don't have lineitem loaded in HEAD yet) - no change
that I can see:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free  buff   cache   si   so     bi  bo    in    cs us sy id wa
 1  0    764  55300     0 1988032    0    0 118797  32  1532  1775 15 39 44  2
 1  0    764  54476     0 1989720    0    0 115507   9  1456  3970 15 39 45  1
 1  0    788  54540     0 1989592    0    0 121651   0  1508  3221 16 37 47  0
 1  0    788  52808     0 1991320    0    0 124109   0  1532  1236 15 38 46  0
 1  0    788  52504     0 1991784    0    0 124518   0  1547  1005 16 39 45  0
 2  0    788  54544     0 1989740    0    5 117965   5  1491  2012 15 36 47  2
 1  0    788  53596     0 1991184    0    0 120424   0  1504  1910 16 37 46  1
Elapsed time is exactly the same (101 s). Is it expected that HEAD would
behave differently?

Cheers

Mark


From: Tom Lane on
Mark Kirkwood <markir(a)paradise.net.nz> writes:
> Elapsed time is exactly the same (101 s). Is it expected that HEAD would
> behave differently?

Offhand, I don't think so. But what I wanted to see was the curve of
elapsed time vs. shared_buffers.

regards, tom lane
