From: Mark on
On Tue, Jul 27, 2010 at 10:29 AM, Volkan YAZICI <yazicivo(a)ttmail.com> wrote:

> On Tue, 27 Jul 2010, Stan Hoeppner <stan(a)hardwarefreak.com> writes:
> > NASA trusts it with over 1PB of storage, but _you_ don't trust it? Who are
> > you again? How many hundreds of TB of storage do you manage on EXT3/4? ;)
>
> NASA trusts Windows and NTFS too, doesn't it?
>

NASA also backs up their data on 5.25" floppy disks [1].

[1] *completely made up information
From: B. Alexander on
We use XFS in production at work, where we routinely deal with hundreds of
terabytes of data (I have heard the word "petabyte" bandied about in several
meetings), so we are hovering on, or already beyond, the size and performance
limits of the ext filesystems.

At home, I primarily use reiserfs, for the simple reason that I have needed,
more often than one would guess, to shrink a filesystem. In fact, I needed to
do so on a box at work.
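
For what it's worth, shrinking is a two-step dance when the filesystem sits
on LVM: shrink the filesystem first, then the volume. A minimal sketch
(device names and sizes are made up, and reiserfs must be unmounted before
it can be shrunk):

  umount /home
  resize_reiserfs -s 40G /dev/vg0/home   # shrink the fs to 40G first
  lvreduce -L 40G /dev/vg0/home          # then shrink the LV to match
  mount /home

Growing goes in the opposite order: enlarge the LV first, then the
filesystem.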

Right now, I am trying to get my brain around the improvements in btrfs, and
hoping it will take off the way many say it will.



On Tue, Jul 27, 2010 at 1:03 PM, Aniruddha <mailingdotlist(a)gmail.com> wrote:

> On Tue, Jul 27, 2010 at 6:19 PM, Stan Hoeppner <stan(a)hardwarefreak.com> wrote:
>
>> Volkan YAZICI put forth on 7/27/2010 8:22 AM:
>>
>> > You are missing a very important point: durability across power failures.
>> > (Excuse me, but the majority of GNU/Linux users are not hooked up to a UPS
>> > or anything similar.) And that's where XFS totally fails[1][2].
>>
>> > [1] http://linux.derkeiler.com/Mailing-Lists/Debian/2008-11/msg00097.html
>>
>> ....
>
>> a fantastic piece of FOSS into which many top-of-their-game
>> kernel engineers have put tens of thousands of man hours, striving to make
>> it
>> the best it can be--and are wildly succeeding.
>>
> That was very informative, thanks. You got me curious and I will test
> XFS on my home system. To be honest I am still a little wary of using XFS in
> a production environment. For years now I have heard stories of power
> failures with catastrophic results when using XFS. Is anyone using XFS in
> a mission-critical production environment? Does anyone have experience with that?
>
From: Volkan YAZICI on
On Tue, 27 Jul 2010, Stan Hoeppner <stan(a)hardwarefreak.com> writes:
> I'd also like to add that anyone smart enough to be on this list is smart
> enough to know you should have a UPS, regardless of what filesystem you use.
> If you're not, you shouldn't be here. If you disagree on the technical merits
> (not cost), you're uneducated and/or stubborn.

You are still not getting it, are you? We have thousands of embedded
Linux boxes in the field, and they are just simple data aggregators. You
can't have a power backup unit in a setting like that.
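
The only durability available on boxes like that is per-write. A minimal
sketch of what we mean (device, mount point, and file names are made up):

  # /etc/fstab -- trade throughput for durability on the data partition
  # (sync is a generic Linux option; wsync is XFS's synchronous-namespace option)
  /dev/sda2  /var/data  xfs  noatime,sync,wsync  0  2

  # or handle it in the aggregator itself: append, then flush
  echo "$(date +%s) $reading" >> /var/data/samples.log
  sync    # block until dirty pages are written out

And even that only helps if the drive's write cache honors the flush.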

I'd also like to add that anyone smart enough to be on this list is
smart enough to know you cannot have a UPS for embedded systems,
regardless of what filesystem you use. If you're not, you shouldn't be
here. If you disagree on the technical merits (not cost), you're
uneducated and/or stubborn.


Regards.


From: Klistvud on
On 27. 07. 2010 19:48:53, Mark wrote:
> >
> > NASA trusts Windows and NTFS too, doesn't it?
> >
>
> NASA also backs up their data on 5.25" floppy disks [1].
>
> [1] *completely made up information
>
>

NASA as an authority on reliable storage? C'mon, the bozos can't even
be trusted with their own Challengers, Columbias, OR the astronauts
therein!

--
Regards,

Klistvud
Certifiable Loonix User #481801
http://bufferoverflow.tiddlyspot.com


From: Stan Hoeppner on
Aniruddha put forth on 7/27/2010 12:03 PM:
> On Tue, Jul 27, 2010 at 6:19 PM, Stan Hoeppner <stan(a)hardwarefreak.com> wrote:
>
>> Volkan YAZICI put forth on 7/27/2010 8:22 AM:
>>
>>> You are missing a very important point: durability across power failures.
>>> (Excuse me, but the majority of GNU/Linux users are not hooked up to a UPS
>>> or anything similar.) And that's where XFS totally fails[1][2].
>>
>>> [1] http://linux.derkeiler.com/Mailing-Lists/Debian/2008-11/msg00097.html
>>
>> ....
>
>> a fantastic piece of FOSS into which many top-of-their-game
>> kernel engineers have put tens of thousands of man hours, striving to make
>> it
>> the best it can be--and are wildly succeeding.
>>
> That was very informative, thanks. You got me curious and I will test XFS
> on my home system. To be honest I am still a little wary of using XFS in a
> production environment. For years now I have heard stories of power failures
> with catastrophic results when using XFS. Is anyone using XFS in
> a mission-critical production environment? Does anyone have experience with that?

How about, and this will probably shock many of you:

1. Kernel.org

The entire Linux source tree, including what becomes the Debian kernel and the
kernels of all other Linux distros, is served from XFS filesystems:

"A bit more than a year ago (as of October 2008) kernel.org, in an ever
increasing need to squeeze more performance out of it's machines, made the
leap of migrating the primary mirror machines (mirrors.kernel.org) to XFS. We
site a number of reasons including fscking 5.5T of disk is long and painful,
we were hitting various cache issues, and we were seeking better performance
out of our file system."

"After initial tests looked positive we made the jump, and have been quite
happy with the results. With an instant increase in performance and
throughput, as well as the worst xfs_check we've ever seen taking 10 minutes,
we were quite happy. Subsequently we've moved all primary mirroring
file-systems to XFS, including www.kernel.org , and mirrors.kernel.org. With
an average constant movement of about 400mbps around the world, and with peaks
into the 3.1gbps range serving thousands of users simultaneously it's been a
file system that has taken the brunt we can throw at it and held up
spectacularly."
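
For anyone who wants to reproduce that timing, the check runs against an
unmounted volume (the device name here is a placeholder):

  umount /dev/sdb1
  xfs_check /dev/sdb1       # read-only consistency check
  xfs_repair -n /dev/sdb1   # no-modify mode: report problems, change nothing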


2. NASA Advanced Supercomputing Facility, NASA Ames Research Center
See my other post for details

3. Industrial Light and Magic (ILM)
At one time ILM had one of the largest installed SGI SAN storage systems on
the planet, possibly _the_ largest, running XFS. It backed their render
farm(s). They don't currently have any render-system info on their site that
I can find. Given the number, size, and scope of their animation projects and
the size to which their rendering farm has grown, they may very well have
switched SAN vendors over the years. I don't know if they still use XFS or
not. I would think so, given that they're working with multi-hundred-gigabyte
files daily.


Many, many others. What you have to understand is that XFS has been around a
long, long time: 17 years across both IRIX and Linux. It's older than EXT2.
Back before cheap Intel/AMD clusters took over the supercomputing marketplace,
SGI MIPS IRIX systems with XFS owned upwards of 30-40% of that market. XFS, on
various platforms and in various versions, has been in government labs,
corporations, and academia for over a decade. At one time Prof. Stephen
Hawking had his own "personal" 32-CPU SGI Origin 3800 for running cosmology
calculations in order to prove his theories. It had XFS filesystems, as have
all SGI systems since 1994.

Here's a list of organizations that have volunteered information to xfs.org.
It is by no means a complete list, and most of the major SGI customers with
XFS on huge SAN systems aren't on it. Note that NAS at NASA Ames isn't listed.

http://xfs.org/index.php/XFS_Companies

--
Stan

