From: Ingo Molnar on

* drepper(a)gmail.com <drepper(a)gmail.com> wrote:

> > For example, just to state the obvious: libaio was written 8 years
> > ago, in 2002, and has been used in apps early on. Why aren't those kernel
> > APIs, while not being a full/complete solution, supported by glibc, and
> > wrapped to a pthreads-based emulation on kernels that don't support it?
>
> You never looked at the glibc code in use and didn't read what I wrote
> before. We do have an implementation of libaio using those interfaces.
> They exist in the Fedora/RHEL glibc and are probably copied elsewhere, too.
> The code is not upstream because it is not general enough. It simply
> doesn't work in all situations.

So it's good enough to be in Fedora/RHEL but not good enough to be in upstream
glibc? How is that possible? Isn't that a double standard?

Upstream libc presence is really what is needed for an API to be ubiquitously
available to apps. That is what 'closes the loop' in the positive feedback
cycle and creates real back pressure and demand on the kernel to get its act
together.
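
To make this concrete, here is roughly what the kernel AIO interface in
question looks like from an application's point of view - a minimal sketch
against libaio, not glibc's (non-public) wrapper code; the file name is
illustrative and error handling is abbreviated:

  /* Build with: gcc -o kaio-demo kaio-demo.c -laio */
  #define _GNU_SOURCE          /* for O_DIRECT */
  #include <libaio.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      io_context_t ctx;
      struct iocb cb, *cbs[1] = { &cb };
      struct io_event ev;
      static char buf[4096] __attribute__((aligned(4096)));
      int fd, err;

      memset(&ctx, 0, sizeof(ctx));
      err = io_setup(8, &ctx);                     /* create AIO context */
      if (err < 0) {                               /* -ENOSYS: no KAIO */
          fprintf(stderr, "io_setup: %s\n", strerror(-err));
          return 1;
      }
      fd = open("testfile", O_RDONLY | O_DIRECT);  /* KAIO wants O_DIRECT */
      io_prep_pread(&cb, fd, buf, sizeof(buf), 0); /* queue a 4k read */
      io_submit(ctx, 1, cbs);                      /* hand it to the kernel */
      io_getevents(ctx, 1, 1, &ev, NULL);          /* wait for completion */
      printf("read returned %lld\n", (long long)ev.res);
      io_destroy(ctx);
      return 0;
  }

On a kernel without KAIO the very first call fails with -ENOSYS - which is
exactly the hole a pthreads-based emulation is supposed to paper over.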

Again, I state it for the third time: the KAIO situation is mostly the
kernel's fault. But glibc is certainly not being helpful in that situation
either, and your earlier claim that you are only waiting for the patches is
rather dishonest.

Thanks,

Ingo
From: Zachary Amsden on
On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> * Avi Kivity<avi(a)redhat.com> wrote:
>
>
>>> The moment any change (be it as trivial as fixing a GUI detail or as
>>> complex as a new feature) involves two or more packages, development speed
>>> slows down to a crawl - while the complexity of the change might be very
>>> low!
>>>
>> Why is that?
>>
> It's very simple: because the contribution latencies and overhead compound,
> almost inevitably.
>
> If you ever tried to implement a combo GCC+glibc+kernel feature you'll know
> ...
>
> Even with the best-run projects in existence it takes forever and is very
> painful - and here I talk from first-hand experience over many years.
>

Ingo, what you miss is that this is not a bad thing. Fact of the matter
is, it's not just painful, it downright sucks.

This is actually a Good Thing (tm). It means you have to get your
feature and its interfaces well defined and able to version forwards and
backwards independently from each other. And that introduces some
complexity and time and testing, but in the end it's what you want. You
don't introduce a requirement to have the feature, but take advantage of
it if it is there.
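
Concretely, the pattern is something like this rough sketch - probe for the
feature at runtime and degrade gracefully. The io_setup() probe here is purely
illustrative, not glibc's actual detection code:

  #define _GNU_SOURCE
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <errno.h>
  #include <stdio.h>

  /* Returns 1 if the running kernel supports KAIO, 0 otherwise. */
  static int kernel_has_kaio(void)
  {
      unsigned long ctx = 0;                /* aio_context_t */

      if (syscall(SYS_io_setup, 1, &ctx) == 0) {
          syscall(SYS_io_destroy, ctx);
          return 1;
      }
      /* ENOSYS means this kernel has no KAIO at all; any other
         error means the interface itself is present. */
      return errno != ENOSYS;
  }

  int main(void)
  {
      if (kernel_has_kaio())
          printf("using kernel AIO\n");
      else
          printf("falling back to pthread-based emulation\n");
      return 0;
  }

You pay for the probe once at startup, and the feature requirement never
leaks into your interface.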

It may take everyone else a couple years to upgrade the compilers,
tools, libraries and kernel, and by that time any bugs introduced by
interacting with this feature will have been ironed out and their
patterns well known.

If you haven't defined the feature well and thought it out carefully ahead
of time, you end up creating a giant mess, and possibly the need for nasty
backwards compatibility (case in point: COMPAT_VDSO). But in the end, you
would have made those same mistakes in your internal tree anyway, and then
you (or, more likely, some other hapless maintainer of the project you
forked) would have to go add the features, fixes and workarounds back to
the original project(s). However, since you developed in an insulated,
sheltered environment, those fixes and workarounds would not be robust and
independently versionable from each other.

The result is you've kept your codebase version-neutral, forked in
outside code, enhanced it, and left the hard work of backporting those
changes and keeping them version-safe to the original package
maintainers you forked from. What you've created is no longer a single
project, it is called a distro, and you're being short-sighted and
anti-social to think you can garner more support than all of those
individual packages you forked. This is why most developers work
upstream and let the goodness propagate down from the top like molten
sugar of each granular package on a flan where it is collected from the
rich custard channel sitting on a distribution plate below before the
big hungry mouth of the consumer devours it and incorporates it into
their infrastructure.

Or at least, something like that, until the last sentence. In short, if
project A has Y active developers, you'd better have Z >> Y active
developers to throw at project B when you fork it from project A.

Zach
From: Ingo Molnar on

* Zachary Amsden <zamsden(a)redhat.com> wrote:

> On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> >* Avi Kivity<avi(a)redhat.com> wrote:
> >
> >>>The moment any change (be it as trivial as fixing a GUI detail or as
> >>>complex as a new feature) involves two or more packages, development speed
> >>>slows down to a crawl - while the complexity of the change might be very
> >>>low!
> >>Why is that?
> >It's very simple: because the contribution latencies and overhead compound,
> >almost inevitably.
> >
> >If you ever tried to implement a combo GCC+glibc+kernel feature you'll know
> >...
> >
> >Even with the best-run projects in existence it takes forever and is very
> >painful - and here I talk from first-hand experience over many years.
>
> Ingo, what you miss is that this is not a bad thing. Fact of the
> matter is, it's not just painful, it downright sucks.

Our experience is the opposite: we have tried both variants, and we report
our experience with both models honestly.

You only have experience about one variant - the one you advocate.

See the asymmetry?

> This is actually a Good Thing (tm). It means you have to get your
> feature and its interfaces well defined and able to version forwards
> and backwards independently from each other. And that introduces
> some complexity and time and testing, but in the end it's what you
> want. You don't introduce a requirement to have the feature, but
> take advantage of it if it is there.
>
> It may take everyone else a couple years to upgrade the compilers,
> tools, libraries and kernel, and by that time any bugs introduced by
> interacting with this feature will have been ironed out and their
> patterns well known.

Sorry, but this is plainly not true. The 2.4->2.6 kernel cycle debacle taught
us that waiting a long time to 'iron out' the details has the following effects:

- developer pain
- user pain
- distro pain
- disconnect
- loss of developers, testers and users
- grave bugs discovered months (years ...) down the line
- untested features
- developer exhaustion

It didn't work, trust me - and I've been around long enough to have suffered
through the whole 2.5.x misery. Some of our worst ABIs come from that cycle as
well.

So we first created the 2.6.x process, and then, as we saw that it worked much
better, we _sped up_ the kernel development process some more, to what many
claimed was an impossible, crazy pace: a two-week merge window, 2.5 months of
stabilization, and a stable release every 3 months.

And you can also see the countless examples of carefully drafted, well
thought-out, committee-written computer standards that were honed for years,
yet are not worth the paper they are written on.

'Extra time' and 'extra bureaucratic overhead to think things through' are
about the worst things you can inject into a development process.

You should think of the human brain as a cache - the 'closer' things are,
both in time and physically, the better they end up being. Also, the more
gradual and the more concentrated a thing is, the better it works out in
general. This is part of basic human nature.

Sorry, but I really think you are trying to rationalize a disadvantage
here ...

Ingo
From: Alan Cox on
> So it's good enough to be in Fedora/RHEL but not good enough to be in upstream
> glibc? How is that possible? Isn't that a double standard?

Yes, it's a double standard.

Glibc has a higher standard than Fedora/RHEL.

Just as the Ubuntu kernel ships various ugly, unfit-for-upstream kernel
drivers.

> kernel's fault. But glibc is certainly not being helpful in that situation
> either, and your earlier claim that you are only waiting for the patches is
> rather dishonest.

I am sure Ulrich is being totally honest, but send him the patches and
you'll find out. Plus, you will learn what the APIs should look like when
you try to create them ...

Alan
From: Zachary Amsden on
On 03/18/2010 11:15 AM, Ingo Molnar wrote:
> * Zachary Amsden<zamsden(a)redhat.com> wrote:
>
>
>> On 03/18/2010 12:50 AM, Ingo Molnar wrote:
>>
>>> * Avi Kivity<avi(a)redhat.com> wrote:
>>>
>>>
>>>>> The moment any change (be it as trivial as fixing a GUI detail or as
>>>>> complex as a new feature) involves two or more packages, development speed
>>>>> slows down to a crawl - while the complexity of the change might be very
>>>>> low!
>>>>>
>>>> Why is that?
>>>>
>>> It's very simple: because the contribution latencies and overhead compound,
>>> almost inevitably.
>>>
>>> If you ever tried to implement a combo GCC+glibc+kernel feature you'll know
>>> ...
>>>
>>> Even with the best-run projects in existence it takes forever and is very
>>> painful - and here I talk from first-hand experience over many years.
>>>
>> Ingo, what you miss is that this is not a bad thing. Fact of the
>> matter is, it's not just painful, it downright sucks.
>>
> Our experience is the opposite: we have tried both variants, and we report
> our experience with both models honestly.
>
> You only have experience about one variant - the one you advocate.
>
> See the asymmetry?
>
>
>> This is actually a Good Thing (tm). It means you have to get your
>> feature and its interfaces well defined and able to version forwards
>> and backwards independently from each other. And that introduces
>> some complexity and time and testing, but in the end it's what you
>> want. You don't introduce a requirement to have the feature, but
>> take advantage of it if it is there.
>>
>> It may take everyone else a couple years to upgrade the compilers,
>> tools, libraries and kernel, and by that time any bugs introduced by
>> interacting with this feature will have been ironed out and their
>> patterns well known.
>>
> Sorry, but this is plainly not true. The 2.4->2.6 kernel cycle debacle taught
> us that waiting a long time to 'iron out' the details has the following effects:
>
> - developer pain
> - user pain
> - distro pain
> - disconnect
> - loss of developers, testers and users
> - grave bugs discovered months (years ...) down the line
> - untested features
> - developer exhaustion
>
> It didn't work, trust me - and I've been around long enough to have suffered
> through the whole 2.5.x misery. Some of our worst ABIs come from that cycle as
> well.
>

You're talking about a single project and comparing it to my argument
about multiple independent projects. In that case, I see no point in
the discussion. If you want to win the argument by strawman, you are
welcome to do so.

> Sorry, but i really think you are really trying to rationalize a disadvantage
> here ...
>

This could very well be true, but until someone comes forward with
compelling numbers (as in, developers committed to working on the
project, the number of patches, and the total amount of code contributed),
there is no point in having an argument; there really isn't anything to
discuss other than opinion. My opinion is that you need a really strong
justification to have a successful fork, and I don't see that justification.

Zach