From: jasee on 6 Dec 2009 03:04

Richard Kettlewell wrote:
> Paul Martin <pm(a)nowster.org.uk> writes:
>> jasee wrote:
>>> The main problem with any of these facilities is the slowness,
>>> which is exactly what you don't want with disk recovery. dd disk
>>> access is apparently 100 times slower than normal disk access and
>>> from what I've done I can
>>
>> Set a block size. The default is to read a character at a time.
>> That's inefficient.
>
> Experimentally the default for coreutils dd is to read 512 bytes at
> a time, although if this is documented I don't immediately see where.
>
> For cached data (including for block devices) larger block sizes do
> significantly improve performance:
>
> bs=512    240MB/s
> bs=1024   440MB/s
> bs=2048   770MB/s
> bs=4096   1.3GB/s
> bs=8192   1.8GB/s
>
> The obvious interpretation is that the cost of system calls dominates.

I've used dd before and it's never been anywhere near that quick, but I've always used it on damaged disks.

> I'd expect that IO would dominate for uncached data, but haven't done
> the experiment.

I would say it does in this case: although the CPU is barely used (according to the task manager I have here), it's extremely slow to open a file manager or gparted, for instance, and show contents.

I'm now using dd_rescue, which uses a variable 'hard' block size; if I set that above 1024, I lose data. But it's a little faster than dd and I can start where I want. At the present rate, if it doesn't slow down any further, about 7 gigabytes would take about 12 days; however, I started at 5 gigabytes, so it should 'only' take about 5 days.

dd, ddrescue and dd_rescue are all very good, but in practice I don't think they're practical at all for modern hard disks. The partition I'm recovering is tiny (14 GB) compared to the size of modern hard disks.
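The block-size effect quoted above is easy to reproduce. A rough sketch of the kind of loop behind such figures, assuming GNU coreutils dd; /dev/sda is an illustrative source device, and reading into /dev/null measures read speed only:

    # Time dd at several block sizes; dd reports the transfer rate on
    # stderr when it finishes. The total byte count is held constant
    # (bs * count = 512MB) so the runs are comparable.
    for bs in 512 1024 2048 4096 8192; do
        count=$((536870912 / bs))
        echo "bs=$bs"
        dd if=/dev/sda of=/dev/null bs=$bs count=$count 2>&1 | tail -n 1
    done

After the first pass the data is largely in the page cache, which matches the cached-read numbers quoted; GNU dd's iflag=direct bypasses the cache if uncached behaviour is wanted.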
From: alexd on 6 Dec 2009 11:54

Meanwhile, at the uk.comp.os.linux Job Justification Hearings, jasee chose the tried and tested strategy of:

> I think it may be because the partition is marked as dirty (it's ntfs)
> so can't be mounted. Usually Linux is more tolerant than NT in this
> respect.*

I think you'll find 'man mount.ntfs' [or mount.ntfs-3g] to be of some use here.

--
<http://ale.cx/> (AIM:troffasky) (UnSoEsNpEaTm(a)ale.cx)
16:53:34 up 8 days, 20:43, 7 users, load average: 0.19, 0.12, 0.09
Plant food is a made up drug
From: jasee on 7 Dec 2009 04:14

alexd wrote:
> Meanwhile, at the uk.comp.os.linux Job Justification Hearings, jasee
> chose the tried and tested strategy of:
>
>> I think it may be because the partition is marked as dirty (it's
>> ntfs) so can't be mounted. Usually Linux is more tolerant than NT in
>> this respect.*
>
> I think you'll find 'man mount.ntfs' [or mount.ntfs-3g] to be of some
> use here.

Thanks, but man mount.ntfs doesn't exist on this CD :-( Are you suggesting using ntfs-3g's (supposed to be very good, I know, with full read/write access to NTFS volumes) recover option (no thanks!) or norecover? The latter is similar to force in mount.ntfs.
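For a dying disk the read-only route is probably the safer of the two. A minimal sketch, assuming ntfs-3g is present on the live CD; /dev/sda1 and the mount point are illustrative, and on some ntfs-3g versions norecover will still refuse to mount a volume whose journal is unclean:

    # Mount read-only without replaying the NTFS journal, so nothing
    # is written back to the failing disk.
    mkdir -p /mnt/recovery
    mount -t ntfs-3g -o ro,norecover /dev/sda1 /mnt/recovery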
From: Theo Markettos on 7 Dec 2009 08:56

jasee <jasee(a)btinternet.com> wrote:
> Maybe, but if you're recovering an ntfs partition then having a block
> of zeros here or there will probably confuse the life out of NT's NTFS
> (chkdsk) :-)

It may be worth looking at ntfsclone instead - it knows about the NTFS filesystem structure. This might come in handy:

    --rescue
        Ignore disk read errors so disks having bad sectors, e.g.
        dying disks, can be rescued the most efficiently way, with
        minimal stress on them. Ntfsclone works at the lowest, sector
        level in this mode too thus more data can be rescued. The
        contents of the unreadable sectors are filled by character '?'
        and the beginning of such sectors are marked by "BadSectoR\0".

Since it ignores free sectors, it doesn't need to spend time trying to recover sectors that aren't actually useful to anyone (apart from undelete, forensics etc).

> The main problem with any of these facilities is the slowness; which
> is exactly what you don't want with disk recovery. dd disk access is
> apparently 100 times slower than normal disk access and from what
> I've done I can imagine that is correct.

Really? dd uses sector access, which should be about as quick as you can get. Or do you mean it slows down a lot when it needs to retry?

Theo
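A sketch of what that might look like here, assuming the NTFS partition is /dev/sda1 and there is somewhere large enough for the image (both illustrative):

    # Clone only the allocated sectors of the partition into an image,
    # carrying on past read errors instead of aborting.
    ntfsclone --rescue --output /mnt/backup/ntfs.img /dev/sda1

    # Or the special image format, smaller because free space is not
    # represented, restorable later with ntfsclone --restore-image.
    ntfsclone --rescue --save-image --output /mnt/backup/ntfs.simg /dev/sda1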
From: jasee on 7 Dec 2009 10:07

Theo Markettos wrote:
> jasee <jasee(a)btinternet.com> wrote:
>> Maybe, but if you're recovering an ntfs partition then having a
>> block of zeros here or there will probably confuse the life out of
>> NT's NTFS (chkdsk) :-)
>
> It may be worth looking at ntfsclone instead - it knows about the
> NTFS filesystem structure. This might come in handy:
>
> --rescue
>     Ignore disk read errors so disks having bad sectors, e.g. dying
>     disks, can be rescued the most efficiently way, with minimal
>     stress on them. Ntfsclone works at the lowest, sector level in
>     this mode too thus more data can be rescued. The contents of the
>     unreadable sectors are filled by character '?' and the beginning
>     of such sectors are marked by "BadSectoR\0".

Yes, maybe, thanks - in future. (Doesn't exist on this live CD.)

> Since it ignores free sectors, it doesn't need to spend time trying
> to recover sectors that aren't actually useful to anyone (apart from
> undelete, forensics etc).
>
>> The main problem with any of these facilities is the slowness;
>> which is exactly what you don't want with disk recovery. dd disk
>> access is apparently 100 times slower than normal disk access and
>> from what I've done I can imagine that is correct.
>
> Really? dd uses sector access, which should be about as quick as
> you can get. Or do you mean it slows down a lot when it needs to
> retry?

This is what I've heard (100 times), and from my experience this time it certainly is, even without the errors I'm getting.
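On the retry point: the stock way to make plain dd press on past bad sectors is conv=noerror,sync, which zero-pads each failed read - the "block of zeros here or there" mentioned earlier. It also suggests why raising the block size loses data: a single bad sector discards the whole bs-sized block. A sketch, with device and paths illustrative; dd_rescue's -s option gives a starting input offset, which is how a copy can be resumed partway through:

    # Keep going after read errors, zero-padding failed blocks so the
    # image stays aligned with the source partition. Each error costs
    # a full bs worth of data, so keep bs small on a damaged disk.
    dd if=/dev/sda1 of=/mnt/backup/part.img bs=4096 conv=noerror,sync

    # Roughly equivalent dd_rescue invocation: -B sets the small
    # "hard" block size it falls back to when a read fails.
    dd_rescue -B 1024 /dev/sda1 /mnt/backup/part.img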