From: Robert Baer on
Pieyed Piper wrote:
> On Sat, 26 Jun 2010 09:09:51 -0700, Fred Abse
> <excretatauris(a)invalid.invalid> wrote:
>
>> On Fri, 25 Jun 2010 22:44:23 -0700, Robert Baer wrote:
>>
>>> ??? 'dd' ??? Whazzat?
>> UN*X disk raw read/write utility. Will clone an entire disk, byte-for-byte.
>
>
> That used to be the case, but the new GPT drives are a bit different, I
> think you'll find.
>
> And "byte-for-byte" is the wrong claim too. Cylinder-for-cylinder or
> track-for-track is more correct. If you want a copy of a volume, you do
> not care what the file system is. If the copy is true, whatever is on
> the volume gets carried over with it.
>
> http://www.justlinux.com/forum/showthread.php?threadid=153121
That is _still_ better than Norton Ghost's so-called "forensic copy".
From: Pieyed Piper on
On Sun, 27 Jun 2010 01:28:28 -0700, Robert Baer <robertbaer(a)localnet.com>
wrote:

>Pieyed Piper wrote:
>> On Sat, 26 Jun 2010 09:09:51 -0700, Fred Abse
>> <excretatauris(a)invalid.invalid> wrote:
>>
>>> On Fri, 25 Jun 2010 22:44:23 -0700, Robert Baer wrote:
>>>
>>>> ??? 'dd' ??? Whazzat?
>>> UN*X disk raw read/write utility. Will clone an entire disk, byte-for-byte.
>>
>>
>> That used to be the case, but the new GPT drives are a bit different, I
>> think you'll find.
>>
>> And "byte-for-byte" is the wrong claim too. Cylinder-for-cylinder or
>> track-for-track is more correct. If you want a copy of a volume, you do
>> not care what the file system is. If the copy is true, whatever is on
>> the volume gets carried over with it.
>>
>> http://www.justlinux.com/forum/showthread.php?threadid=153121
> That is _still_ better than Norton Ghost's so-called "forensic copy".


dd for Windows...

http://www.chrysocome.net/dd

or the Win32 Disk Imager utility.

https://wiki.ubuntu.com/Win32DiskImager
From: Grant on
On Sun, 27 Jun 2010 01:20:58 -0700, Robert Baer <robertbaer(a)localnet.com> wrote:

>Grant wrote:
>> On Fri, 25 Jun 2010 22:44:23 -0700, Robert Baer <robertbaer(a)localnet.com> wrote:
>>
>>> Grant wrote:
>>>> On Fri, 25 Jun 2010 16:57:11 -0400, "Michael A. Terrell" <mike.terrell(a)earthlink.net> wrote:
>>>>
>>>>> Grant wrote:
>>>>>> On Thu, 24 Jun 2010 23:40:18 -0400, "Michael A. Terrell" <mike.terrell(a)earthlink.net> wrote:
>>>>>>
>>>>>>> D Yuniskis wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> A 501(c)(3) that I am affiliated with received a donation
>>>>>>>> of several hundred ~80G SATA/PATA drives the other day.
>>>>>>>> They have allegedly (?) been bulk erased. I was asked,
>>>>>>>> today, if there is any way to make the drives serviceable,
>>>>>>>> again.
>>>>>>>>
>>>>>>>> I have not seen the drives or had a chance to play with
>>>>>>>> any of them. As "proof" that they were bulk erased, I
>>>>>>>> am told each drive bears a label:
>>>>>>>> ERASED
>>>>>>>> Magnetic data is completely erased.
>>>>>>>> Erased product can't be reused or repaired.
>>>>>>>>
>>>>>>>> When *I* take a drive out of service, I "bulk erase" them
>>>>>>>> (after "electronically" overwriting the existing data) and
>>>>>>>> then subject them to the 500G drop test :> But, I'll admit
>>>>>>>> I have never *tried* to recover data from a drive thusly
>>>>>>>> (ahem) "treated".
>>>>>>>>
>>>>>>>> My initial response to them was "recycle them, they're trash".
>>>>>>>> Was I too hasty?
>>>>>>>>
>>>>>>>> I would imagine all the servo information, low level
>>>>>>>> formatting, bad sector table, etc. are gone or corrupted
>>>>>>>> so putting these back into service would require "special
>>>>>>>> factory tools"...
>>>>>>> There is a software command for newer drives to erase all data.
>>>>>> Yes, but it takes too long to execute for commercial reality?
>>>>> Have you ever tried it?
>>>> No, but the time 'smartctl' quotes for a secure erase of an HDD
>>>> is about what writing zeroes to the entire surface would take.
>>>> So I 'dd' zeroes to the HDD instead :)
>>>>
>>>> Grant.
>>> ??? 'dd' ??? Whazzat?
>>
>> Command line utility on unix-like OS (eg. Linux), dd is a generalised
>> data transfer utility. Example, write zeroes to entire HDD:
>>
>> # dd if=/dev/zero bs=1M of=/dev/sdd
>>
>> Decode: # - superuser prompt, if=/dev/zero - input file is endless
>> stream of zeroes, bs=1M - blocksize is 1MB, of=/dev/sdd - output
>> file is a particular HDD treated as one big file; no end condition
>> specified so command runs until output file (the drive) is full.
>>
>> Grant.
> Note the larger the file size, the faster the read or write of that
>data block (as long as one does not over-run free RAM space with that
>I/O block).
> With a measly 1Meg block, it takes LONGER to open the file than to
>read or write it; try 100Megs or so..

Consider that in this case there is no filesystem, simply one linear
file being 80GB in length ;)

In practice there's not much difference between 4k and 1M blocksizes
with dd for this command. Forgetting to specify a blocksize leaves the
default of 512 bytes, which is very slow, as each write makes the OS
rewrite a 4k logical sector.

When wiping a larger drive I will use a bigger block and run a second
process to trigger dd's progress report; in a different terminal run:
'while :; do killall -USR1 dd; sleep 10; done'.
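A minimal sketch of the wipe-plus-progress idea, using a scratch file
as a safe stand-in for the real target (the /dev/sdd device name is
just the example from earlier in the thread; point dd at the wrong
device and the data is gone):

```shell
# Safe dry run: write zeroes in 1 MiB blocks to a scratch file
# standing in for the target drive (e.g. /dev/sdd).
TARGET=/tmp/wipe-demo.img

# 16 MiB here for the demo; on a real device, omit count= and dd
# simply runs until the drive is full.
dd if=/dev/zero of="$TARGET" bs=1M count=16 2>/dev/null

# The progress trick from above, run in another terminal while dd
# works (GNU dd prints its block counts when it receives SIGUSR1):
#   while :; do killall -USR1 dd; sleep 10; done

# Verify: deleting every NUL byte should leave nothing behind.
[ "$(tr -d '\0' < "$TARGET" | wc -c)" -eq 0 ] && echo "all zero"
```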

Grant.
--
http://bugs.id.au/
From: Jan Panteltje on
On a sunny day (Sun, 27 Jun 2010 03:00:09 +0100) it happened Nobody
<nobody(a)nowhere.com> wrote in <pan.2010.06.27.01.59.51.890000(a)nowhere.com>:

>On Sat, 26 Jun 2010 09:25:48 -0700, Pieyed Piper wrote:
>
>>>> ??? 'dd' ??? Whazzat?
>>>
>>>UN*X disk raw read/write utility. Will clone an entire disk, byte-for-byte.
>>
>>
>> That used to be the case, but the new GPT drives are a bit different, I
>> think you'll find.
>>
>> And "byte-for-byte" is the wrong claim too. Cylinder-for-cylinder or
>> track-for-track is more correct.
>
>Or even more correct, "block-for-block". The main difference between "dd"
>and e.g. "cat" is that "dd" specifically reads/writes blocks of a
>user-defined size. This can matter if the source or destination is a tape
>drive, where each read/write operation reads/writes one block, padding or
>truncating the data as necessary.
>
>In many cases "cat /dev/zero > /dev/sdd" will do just as well; the main
>reason for using dd is that it won't abort on the first error.
>
>But if you want to securely erase a drive, use "hdparm --security-erase".
>That will overwrite remapped sectors, "hidden" areas, etc, while "dd" will
>only overwrite the parts which are "visible". The feature is supported on
>all IDE drives made since ~2001 (at which time the largest available IDE
>drive was 15GB, so any drive larger than that will support it).

I am sure I bought some 40 GB Seagates in 2001.
Just found the bill: 1 40GB 5400 rpm IDE ATA 100, 04-05-2001.
And it is still working!
Seagate is really good.
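As an aside, the hdparm secure-erase route quoted above is a two-step
sequence, since ATA secure erase refuses to run until a user password
has been set. A dry-run sketch (the echo prefix is deliberate;
/dev/sdX is a placeholder, "p" a throwaway password, and running these
for real destroys everything on the drive):

```shell
# Dry-run sketch of ATA secure erase via hdparm. --user-master u
# selects the user password slot; remove the echo prefix (and pick
# the correct device!) to actually run it.
DEV=/dev/sdX
echo hdparm --user-master u --security-set-pass p "$DEV"
echo hdparm --user-master u --security-erase p "$DEV"
```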


From: D Yuniskis on
Grant wrote:
> On Sun, 27 Jun 2010 01:20:58 -0700, Robert Baer <robertbaer(a)localnet.com> wrote:
>
>> Grant wrote:
>>> On Fri, 25 Jun 2010 22:44:23 -0700, Robert Baer <robertbaer(a)localnet.com> wrote:
>>>
>>>> Grant wrote:
>>>>> On Fri, 25 Jun 2010 16:57:11 -0400, "Michael A. Terrell" <mike.terrell(a)earthlink.net> wrote:
>>>>>
>>>>>> Grant wrote:
>>>>>>> On Thu, 24 Jun 2010 23:40:18 -0400, "Michael A. Terrell" <mike.terrell(a)earthlink.net> wrote:
>>>>>>>
>>>>>>>> D Yuniskis wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> A 501(c)(3) that I am affiliated with received a donation
>>>>>>>>> of several hundred ~80G SATA/PATA drives the other day.
>>>>>>>>> They have allegedly (?) been bulk erased. I was asked,
>>>>>>>>> today, if there is any way to make the drives serviceable,
>>>>>>>>> again.
>>>>>>>>>
>>>>>>>>> I have not seen the drives or had a chance to play with
>>>>>>>>> any of them. As "proof" that they were bulk erased, I
>>>>>>>>> am told each drive bears a label:
>>>>>>>>> ERASED
>>>>>>>>> Magnetic data is completely erased.
>>>>>>>>> Erased product can't be reused or repaired.
>>>>>>>>>
>>>>>>>>> When *I* take a drive out of service, I "bulk erase" them
>>>>>>>>> (after "electronically" overwriting the existing data) and
>>>>>>>>> then subject them to the 500G drop test :> But, I'll admit
>>>>>>>>> I have never *tried* to recover data from a drive thusly
>>>>>>>>> (ahem) "treated".
>>>>>>>>>
>>>>>>>>> My initial response to them was "recycle them, they're trash".
>>>>>>>>> Was I too hasty?
>>>>>>>>>
>>>>>>>>> I would imagine all the servo information, low level
>>>>>>>>> formatting, bad sector table, etc. are gone or corrupted
>>>>>>>>> so putting these back into service would require "special
>>>>>>>>> factory tools"...
>>>>>>>> There is a software command for newer drives to erase all data.
>>>>>>> Yes, but it takes too long to execute for commercial reality?
>>>>>> Have you ever tried it?
>>>>> No, but the time 'smartctl' quotes for a secure erase of an HDD
>>>>> is about what writing zeroes to the entire surface would take.
>>>>> So I 'dd' zeroes to the HDD instead :)
>>>>>
>>>>> Grant.
>>>> ??? 'dd' ??? Whazzat?
>>> Command line utility on unix-like OS (eg. Linux), dd is a generalised
>>> data transfer utility. Example, write zeroes to entire HDD:
>>>
>>> # dd if=/dev/zero bs=1M of=/dev/sdd
>>>
>>> Decode: # - superuser prompt, if=/dev/zero - input file is endless
>>> stream of zeroes, bs=1M - blocksize is 1MB, of=/dev/sdd - output
>>> file is a particular HDD treated as one big file; no end condition
>>> specified so command runs until output file (the drive) is full.
>>>
>>> Grant.
>> Note the larger the file size, the faster the read or write of that
>> data block (as long as one does not over-run free RAM space with that
>> I/O block).

dd(1)'s use of "files" is only in the sense that devices
(including the pseudo-device "zero") reside in the filesystem
as a common namespace. The dd(1) invocation opens the disk
device (raw or block, depending on the minor device number
associated with the name used to access the physical device)
*once* regardless of blocksize.

>> With a measly 1Meg block, it takes LONGER to open the file than to
>> read or write it; try 100Megs or so..

The difference lies in how much is passed to fwrite(3c) in
each invocation (i.e., how much overhead the API incurs
as you push bytes across it).
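The per-call cost is easy to see from a shell (a rough illustration,
not a benchmark; exact rates vary by machine): move the same 64 MiB
through dd with a tiny block and a big one, writing to /dev/null so
disk speed doesn't enter into it.

```shell
# Same 64 MiB moved twice; only the number of read/write calls
# differs. bs=512 issues 131072 calls, bs=1M issues 64. dd's own
# summary line (bytes copied, seconds, rate) shows the gap, since
# /dev/null itself costs essentially nothing.
dd if=/dev/zero of=/dev/null bs=512 count=131072
dd if=/dev/zero of=/dev/null bs=1M count=64
```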

> Consider that in this case there is no filesystem, simply one linear
> file being 80GB in length ;)

Again, it's not really a *file* but, rather, an interface to
a bdevsw associated with the "disk drive".

> In practice there's not much difference between 4k and 1M blocksizes
> with dd for this command. Forgetting to specify a blocksize leaves the
> default of 512 bytes, which is very slow, as each write makes the OS
> rewrite a 4k logical sector.
>
> When wiping a larger drive I will use a bigger block and run a second
> process to trigger dd's progress report; in a different terminal run:
> 'while :; do killall -USR1 dd; sleep 10; done'.