From: Ryan Chan on
I was a little bit confused: in a journaling filesystem, the data is
always written to the journal first, but pdflush flushes dirty pages
periodically from memory to the filesystem.

Can anyone offer a better explanation, e.g. the relationship between
the two, and the steps data takes on its way from a program to the disk?

Thanks.
From: Grant on
On Tue, 26 Jan 2010 06:18:41 -0800 (PST), Ryan Chan <ryanchan404(a)gmail.com> wrote:

>I was a little bit confused: in a journaling filesystem, the data is
>always written to the journal first, but pdflush flushes dirty pages
>periodically from memory to the filesystem.
>
>Can anyone offer a better explanation, e.g. the relationship between
>the two, and the steps data takes on its way from a program to the disk?

The idea of delayed flushing of dirty memory to disk is so that file
I/O may be coalesced into groups of disk accesses for greater efficiency.

Gains are made by reducing disk seek time relative to disk data I/O time.
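To illustrate (a sketch in Python, not kernel code -- the file name and
record format here are made up):

```python
import os
import tempfile

# A sketch of why delayed flushing helps: many small writes land in the
# page cache first, and the kernel is free to coalesce them into fewer,
# larger disk operations before pdflush (or an explicit fsync) pushes
# them out.
fd, path = tempfile.mkstemp()
try:
    for i in range(1000):
        # write() returns as soon as the data is in the page cache,
        # not when it reaches the platter.
        os.write(fd, b"small record %d\n" % i)
    # Ask the kernel to push the coalesced dirty pages (and the file's
    # metadata) to disk now rather than waiting for the periodic flush.
    os.fsync(fd)
    size = os.fstat(fd).st_size
finally:
    os.close(fd)
    os.remove(path)
```

Until that fsync(), nothing forces the data out of memory.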

Grant.
--
http://bugs.id.au/
From: Ryan Chan on
On Jan 27, 4:17 am, Grant <g_r_a_n...(a)bugs.id.au> wrote:
>
> The idea of delayed flushing of dirty memory to disk is so that file
> I/O may be coalesced into groups of disk accesses for greater efficiency.
>
> Gains are made by reducing disk seek time relative to disk data I/O time.
>


Hi,

If this is the case, is there a chance of data loss?
And this seems to be the opposite of what a journaling filesystem does?

Thanks.
From: John Hasler on
Ryan Chan writes:
> If this is the case, is there a chance of data loss?

Sure. To eliminate it you will have to turn off delayed flushing, at
considerable cost in performance. High-integrity database servers
sometimes do this.
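In userspace that looks roughly like this (a sketch, assuming a POSIX
system; the path is made up):

```python
import os
import tempfile

# A minimal sketch: opening with O_SYNC makes every write() block until
# the data has reached stable storage, which gives up the benefit of
# delayed flushing but removes the window for data loss -- roughly the
# trade-off a high-integrity database accepts for its log.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "record.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    # Durable (barring a lying disk cache) by the time this returns.
    n = os.write(fd, b"committed record\n")
finally:
    os.close(fd)
os.remove(path)
os.rmdir(tmpdir)
```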

> And this seems to be the opposite of what a journaling filesystem does?

The primary goal of most journaling filesystems is to ensure
consistency, not data integrity.
--
John Hasler
jhasler(a)newsguy.com
Dancing Horse Hill
Elmwood, WI USA
From: Grant on
On Wed, 27 Jan 2010 05:51:26 -0800 (PST), Ryan Chan <ryanchan404(a)gmail.com> wrote:

>On Jan 27, 4:17 am, Grant <g_r_a_n...(a)bugs.id.au> wrote:
>>
>> The idea of delayed flushing of dirty memory to disk is so that file
>> I/O may be coalesced into groups of disk accesses for greater efficiency.
>>
>> Gains are made by reducing disk seek time relative to disk data I/O time.
>>
>
>
>Hi,
>
>If this is the case, is there a chance of data loss?

Yes, unexpected power loss means you'll lose what's in memory.

>And this seems to be the opposite of what a journaling filesystem does?

Not as such. AFAIK the journal is there so that the data on disk makes
sense the next time you mount the filesystem. Journal replay will chop
off any useless (partially written) data that didn't completely make it
to the disk, so you are left with a consistent filesystem.
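A toy model of that replay step (not any real on-disk journal format --
the record layout here is invented for illustration):

```python
# Replay honors a record only if its commit marker made it to disk;
# the partially written tail is simply discarded, which is how the
# filesystem ends up consistent even though recent data is lost.
def replay(journal):
    """Return only the fully committed records, in order."""
    recovered = []
    for record in journal:
        if not record.get("committed"):
            # Partial write: this record never fully reached the disk,
            # so it and everything after it are chopped off.
            break
        recovered.append(record["data"])
    return recovered

journal = [
    {"data": "write A", "committed": True},
    {"data": "write B", "committed": True},
    {"data": "write C", "committed": False},  # power failed mid-write
]
recovered = replay(journal)
```

So the journal protects the structure of the filesystem, not the last
few seconds of application data.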

I agree with John H's response.

Grant.
--
http://bugs.id.au/