From: Aaron Toponce on
On 07/15/2010 06:45 PM, Jordon Bedwell wrote:
> Anything, and I repeat anything, is recoverable; even if you remove the
> filesystem, you can recover pieces of the file.

[citation needed]

When you do a low-level write to the disk, you're wiping out anything
and everything. One single pass of zeroes, and not a single hard drive
recovery company on the planet will be willing to attempt a recovery of
your data. It's gone. Two writes, if you're ultra paranoid. Any
additional writes, and you're just wasting your time.
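
That single pass is a one-liner with dd. A sketch, where /dev/sdX is a
placeholder for the target drive (and, obviously, this destroys everything
on it):

dd if=/dev/zero of=/dev/sdX bs=1M   # overwrite the whole drive with zeroes
sync                                # make sure the writes actually hit the platters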

Further, if you physically damage the disks even the slightest, by
bending them, drilling holes, exposing them to high degrees of heat,
etc, again, not a single hard drive recovery company on the planet will
make the attempt. It's not worth their time. It's not worth your money.

> You can remove remnants
> of the file using overwrite methods, but you need to make sure they
> properly implement the algorithm, and do your own research on the
> algorithms to make sure they were designed or updated for modern
> hard drives. E.g., the Gutmann method was designed for older HDs and will
> not work on newer HDs most of the time (depending on who implements
> it).

Any modern hard drive that implements an RLL encoding scheme, which is to
say virtually every drive made since the mid-1990s, can be securely erased
with a single pass of zeroes. The head positioning is too accurate to leave
the residual traces that Gutmann claims in his paper can be picked up with
a microscope. With today's perpendicular recording and areal densities,
there is simply no room for such remnants: each bit gets written in exactly
the same place it was before. That wasn't the case with MFM encoding
(pre-1990 drives), which is what Gutmann's method was designed around.
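
If you want to convince yourself the pass actually landed, a quick
read-back check works (again with the placeholder /dev/sdX):

# On a fully zeroed drive, cmp reports only "EOF on /dev/sdX" and exits
# with status 1; any "differ" message means a nonzero byte survived.
cmp /dev/zero /dev/sdX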

> Now, removing remnants of the file doesn't make it unrecoverable
> (in all circumstances); you might still be able to do a very low-level
> recovery, something they would generally reserve for, say, a RICO
> investigation, terrorists and those sorts. The only way to stop any and
> all data leaks, recoveries or anything of the sort is to either degauss,
> destroy or use encryption on the drive from the get-go and, to be honest,

No, not really. Encryption is definitely good enough, and erasing just the
first and last gigabyte or so with random data will destroy any clue that
encryption was ever used on the disk. As far as an investigator could tell,
the whole disk was simply overwritten with random data, which creates
plausible deniability.
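
A sketch of that first-and-last-gigabyte wipe, again with the placeholder
/dev/sdX (the sizes are illustrative, not magic numbers):

SIZE_MB=$(( $(blockdev --getsize64 /dev/sdX) / 1048576 ))   # drive size in MiB
dd if=/dev/urandom of=/dev/sdX bs=1M count=1024             # random data over the first GiB
dd if=/dev/urandom of=/dev/sdX bs=1M seek=$(( SIZE_MB - 1024 )) count=1024   # ...and the last GiB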

> the only proper implementation of drive encryption (beyond the actual
> encryption) would be Red Hat's (and this is only because they offer the
> ability to span encryption across multiple drives and recommend it), and
> no drive encryption (beyond TrueCrypt) offers deniability.

[citation needed]

As far as I know, RHEL isn't doing anything special beyond LUKS and
dm-crypt, which are available in Debian and just about every other
GNU/Linux-based operating system. And, as mentioned above, it's trivial
to create deniability with any encrypted disk.
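
For the record, a minimal sketch of that LUKS/dm-crypt setup on Debian,
assuming a hypothetical partition /dev/sdX5 (cryptsetup prompts for the
passphrase):

cryptsetup luksFormat /dev/sdX5          # initialize the LUKS container
cryptsetup luksOpen /dev/sdX5 cryptdisk  # map it to /dev/mapper/cryptdisk
mkfs.ext3 /dev/mapper/cryptdisk          # put a filesystem on the decrypted device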

> Something
> I've brought up on both Debian and Ubuntu and even to Red Hat. As a
> matter of fact, Ubuntu developers fought with me over the idea, telling
> me that only criminals could possibly want plausible deniability, but
> Ubuntu is rather closed-minded most of the time when it comes to this
> sort of thing.

Generally, when I've dealt with Ubuntu developers, they've had rock-solid
reasons for why something does or does not get implemented. It's never been
a matter of hard heads or closed minds, as you suggest.

--
. O . O . O . . O O . . . O .
. . O . O O O . O . O O . . O
O O O . O . . O O O O . O O O

From: Michael Iatrou on
On Friday, 16 July 2010, Jordan Metzmeier wrote:

> On 07/15/2010 08:46 PM, Michael Iatrou wrote:
> > I am skeptical whether there is any good reason for tools like wipe2fs,
> > zerofree and friends (if there are any...), when a dd && sync && rm
> > have the same result.
>
> You could say this about many things. These commands make things
> convenient. Why do those things manually when software can do it for you?
>
> Example:
>
> Under the same logic I could say that there is no good reason for dget. I
> can manually wget the .dsc, .orig.tar and .changes to accomplish the
> same thing... but why, when I can just dget the .dsc?

This is a philosophical question rather than a technical one: it is part
of the UNIX mentality to have simple tools that can be combined to
complete complicated tasks. Practically speaking, if the original poster
had been educated in the principles of UNIX design, he wouldn't have gone
looking for a specialized tool to perform a simple task.
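
For reference, the dd && sync && rm pipeline I mean is simply this,
assuming the filesystem in question is mounted at /mnt/sdc1:

dd if=/dev/zero of=/mnt/sdc1/zeros.bin bs=1M   # fill all free space with zeroes
sync                                           # flush the zeroes to disk
rm /mnt/sdc1/zeros.bin                         # then delete the filler file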

Just my 2 cents.

--
Michael Iatrou


From: Jordan Metzmeier on

On 07/16/2010 01:42 PM, Michael Iatrou wrote:
>
> This is a philosophical question rather than a technical one: it is part
> of the UNIX mentality to have simple tools that can be combined to
> complete complicated tasks. Practically speaking, if the original poster
> had been educated in the principles of UNIX design, he wouldn't have gone
> looking for a specialized tool to perform a simple task.
>
> Just my 2 cents.
>

I personally like the "right tool for the right job" philosophy, although
either dd or the specialized tools could be considered the "right tool"
here.

If you spend more time looking for the "right tool" than the job would have
taken to do manually, then I would be inclined to agree with you.

--
Jordan Metzmeier

From: H.S. on
On 16/07/10 01:42 PM, Michael Iatrou wrote:
>
> This is a philosophical question rather than a technical one: it is part
> of the UNIX mentality to have simple tools that can be combined to
> complete complicated tasks. Practically speaking, if the original poster
> had been educated in the principles of UNIX design, he wouldn't have gone
> looking for a specialized tool to perform a simple task.

Heh, no, I am not formally educated in UNIX design principles, but you
appear to be. Could you offer any insight on the proposal in my first post?
Is it suitable? Is it not simple enough with UNIX tools? Is there something
simpler? All while keeping the aforementioned constraints in mind, of
course.





--

Please reply to this list only. I read this list on its corresponding
newsgroup on gmane.org. Replies sent to my email address are just
filtered to a folder in my mailbox and get periodically deleted without
ever having been read.


From: Andre Majorel on
On 2010-07-15 13:55 -0400, H.S. wrote:
> On 15/07/10 01:38 PM, Perry E. Metzger wrote:
>
> > dd if=/dev/zero of=/dev/scd bs=1M
>
> Yes, but that would wipe out everything, the OS as well.
>
> I was just looking to make the already-deleted files unrecoverable by a
> casual user. In other words, since deleting a file frees its space on
> disk, filling up the disk with a file of all zeros and then deleting that
> file would overwrite the earlier deleted files with zeros. Am I correct
> in this?

Yes. The data you write to the new file has to be stored somewhere
and the only sectors available are those previously allocated to
the deleted files.

If you're feeling paranoid, you could fill with junk instead of NULs, to
protect against any optimisation at the filesystem level.

perl -e '
    $| = 1;                          # autoflush: print fails as soon as the disk is full
    $bytes = int (1e4 + 1e6 * rand);                          # buffer size: 10 kB to ~1 MB
    for $n (1..$bytes) { $noise .= chr (int (rand (256))) }   # fill the buffer with random bytes
    while (print $noise) {}                                   # write it until the write fails
' >/mnt/sdc1/zeros.bin; sync

--
André Majorel <http://www.teaser.fr/~amajorel/>
bugs.debian.org, food for your spambots.

