From: Rzeźnik on 12 Oct 2009 11:04

On 12 Paź, 16:41, Andreas Leitgeb <a...(a)gamma.logic.tuwien.ac.at> wrote:
> > Rzeźnik: Rather than predict what the user really wants to do with
> >   the API, give them a good base and let them override where they
> >   want custom behaviour. Control them by offering them a good API
> >   of public and protected methods so they can't do too nasty things.
>
> I must admit that Rzeźnik's point of view is nearer to my own, but
> then again, I might have misunderstood both.

As for me, you expressed my thoughts correctly. Thank you.
From: Arved Sandstrom on 12 Oct 2009 11:08

Rzeźnik wrote:
> On 12 Paź, 14:32, Lew <no...(a)lewscanon.com> wrote:
>> Mike Schilling wrote:
>>>> Declare all concrete classes as final.
>> Rzeźnik wrote:
>>> ... if you are ominous
>> I don't understand.
>
> I meant "omniscient", my mistake.
>
>>> If you are omniscient then this idea is great. I'd bet you are not.
>>> In fact declaring everything as final is worse than most things you
>>> can do to cripple program development. IMO every time you are
>>> _introducing_ final you have to think deeply.
>> Every time you omit 'final' in a class declaration you should consider
>> carefully. Read the item in /Effective Java/ referenced upthread for a
>> thorough explanation of why to prefer non-heritability.
>
> If there is such guidance there then the book is useless rubbish - but
> I can't say for sure because I actually did not read it.
>
>> As an API writer (that is, one who writes a class intended for use)
>> one should control how the API is used, not predict it.
>
> Agreed. So if we are on the same page - why do you try to predict how
> your API will be used and 'contaminate' it with 'final's?

Point being, if you mark a class as final, there's no more prediction involved - you have laid down the law.

You can get mixed messages about using final for classes depending on who you read. For example, Goetz in 2002 (http://www.ibm.com/developerworks/java/library/j-jtp1029.html) does not mention composition once, and since the use of final on classes can often be a deliberate decision to discourage inheritance and promote composition, it's an odd omission. What he says about using final to enforce immutability is quite right, but I believe he's incorrect when he states that the use of final on classes discourages OO design. I think it's the other way around.

Nobody is saying that inheritance is evil, just that it should be considered carefully when designing a class.
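To make those two uses of final concrete, here is a minimal sketch; the class names (Money, TaggedMoney) are invented for illustration, not from any real API. The first class uses final on both the class and its field to lock in immutability; the second shows a client gaining new behaviour by composition rather than by extending the final class.

```java
// Hypothetical sketch: 'final' enforcing immutability, with composition
// as the extension mechanism. All names are invented for illustration.
final class Money {                       // final class: subclasses cannot break invariants
    private final long cents;             // final field: state fixed after construction
    Money(long cents) { this.cents = cents; }
    long cents() { return cents; }
    Money plus(Money other) { return new Money(cents + other.cents); }
}

// A client wanting "Money with a currency label" composes instead of extending.
final class TaggedMoney {
    private final Money amount;           // has-a, not is-a
    private final String currency;
    TaggedMoney(Money amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }
    String describe() { return amount.cents() + " " + currency; }
}

public class FinalDemo {
    public static void main(String[] args) {
        Money m = new Money(150).plus(new Money(50));
        System.out.println(new TaggedMoney(m, "EUR").describe());  // 200 EUR
    }
}
```

Nothing is lost by the class being final here, because the invariant (immutability) is exactly what a subclass could have broken.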
It's not acceptable for starters to *not* think about it: both the decision to use the final keyword and the decision not to use it should be thought out. It shouldn't be a default. Goetz in 2002, and from the sounds of it you at present, seem to fall into the camp of "don't mark a class as final unless you've got really good reasons to do it", whereas I fall into the camp of "don't _not_ mark a class as final unless you can explain why you want that class to be a base class."

Inheritance is certainly not a bad thing when properly used. I just ran some stats on a medium-sized project I am helping to maintain, and out of 2849 classes roughly a quarter of them (657 to be precise) extend another. The majority of them are good uses of inheritance:

- creating custom exception classes (although I know that in some cases this was overly enthusiastic);
- extending core classes that are meant to be so extended when doing JSF customizations;
- concrete jobs that extend an abstract job that implements the Quartz Job interface;
- all the JPA entities extend a @MappedSuperclass;
- concrete serializer classes that extend a default serializer base class, the whole family meant to handle different types of inventory serial numbers;
- concrete realizations of base classes (default or abstract) that provide commonality for managed beans in this application.

And so forth. Bear in mind too that when Bloch and others warn about inheritance they are referring to implementation inheritance, where the extensible classes are being used across package boundaries (IOW, probably not by the same people who wrote the base classes), and where the extensible classes are neither explicitly designed nor commented for inheritance. If all of those conditions obtain, then you could have problems.

AHS
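The "abstract job" item above is a good illustration of a class explicitly designed for inheritance. The sketch below is NOT the real Quartz API; Job, AbstractJob, and CleanupJob are invented stand-ins showing the shape: the base class fixes the surrounding behaviour and documents exactly one protected hook for subclasses.

```java
// Hypothetical sketch of a base class designed for inheritance.
// Not the real Quartz API -- all names here are invented.
interface Job { void execute(String context); }

abstract class AbstractJob implements Job {
    // Template method: fixed logging and error handling around the hook.
    @Override
    public final void execute(String context) {   // final: the skeleton is not negotiable
        System.out.println("starting: " + getClass().getSimpleName());
        try {
            doWork(context);                      // the single documented extension point
        } catch (RuntimeException e) {
            System.out.println("job failed: " + e.getMessage());
        }
    }
    // Subclasses implement this; the base class promises to call it
    // exactly once per execute().
    protected abstract void doWork(String context);
}

final class CleanupJob extends AbstractJob {      // concrete leaf stays final
    @Override
    protected void doWork(String context) {
        System.out.println("cleaning " + context);
    }
}

public class JobDemo {
    public static void main(String[] args) {
        new CleanupJob().execute("tmp-dir");
    }
}
```

Note how final and inheritance cooperate here: the template method is final, the hook is abstract, and the concrete subclass is final again.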
From: Rzeźnik on 12 Oct 2009 11:35

On 12 Paź, 16:43, Leif Roar Moldskred <le...(a)huldreheim.homelinux.org> wrote:
> Rzeźnik <marcin.rzezni...(a)gmail.com> wrote:
>
> > Yes, a few :-)
> > First of all: you answered your question already - it is done to
> > prevent subclasses from redefining them. So it is a mechanism for
> > enforcing architectural constraints in code. Let's add, a very
> > dangerous one, because a mistake here may seriously cripple code
> > re-usability.
>
> That might _sound_ ominous, but in practice I find that the only non-
> trivial code that actually _can_ be gainfully inherited is code that
> has explicitly been _designed_ to be inherited.
>
> If proper thought has gone into making code re-usable, then the use of
> the "final" keyword won't be a problem, as it will be where it should
> be and not where it shouldn't. If proper thought _hasn't_ gone into
> making the code re-usable, the use of the "final" keyword isn't likely
> to be a problem either, since the code most likely won't be practically
> reusable anyway.

Right, but I do not think that shutting the door on inheritance is substantially better. Quite the opposite, if you ask me. Even code that was not designed up front to be reusable may become reusable, in a limited way, at some unforeseen point in the future. It may still be easier to write a well-behaved subclass than to redo all the work or deal with murky composition. And even if that never happens, you are not hurting anyone by omitting final. So I judge 'final' to be usable only when it is known beyond doubt that a class will never be redefined - which is quite rare. I can think of dealing with 'equals' through subclasses, utility classes, and classes representing the most concrete redefinitions in the Strategy pattern. Sometimes it suffices just to hide the class within a package and export only an interface; that way you still retain the ability to redefine it 'silently'.
From: Rzeźnik on 12 Oct 2009 11:47

On 12 Paź, 17:08, Arved Sandstrom <dces...(a)hotmail.com> wrote:
> Rzeźnik wrote:
>
> Point being, if you mark a class as final, there's no more prediction
> involved - you have laid down the law.

For good or for bad?

> You can get mixed messages about using final for classes depending on
> who you read. For example, Goetz in 2002
> (http://www.ibm.com/developerworks/java/library/j-jtp1029.html) does not
> mention composition once, and since the use of final on classes can
> often be a deliberate decision to discourage use of inheritance and
> promote composition, it's an odd omission. What he does mention about
> the use of final to enforce immutability is quite right, but I believe
> he's incorrect when he states that the use of final on classes
> discourages OO design. I think it's the other way around.

'Final' is a part of OO design since it is tied to inheritance, so I agree with you here.

> Nobody is saying that inheritance is evil, just that it should be
> considered carefully when designing a class. It's not acceptable for
> starters to *not* think about it: both the decision to use the final
> keyword, and to not use it, should be thought out. It shouldn't be a
> default. Goetz in 2002, and from the sounds of it you at present, seem
> to fall into the camp of "don't mark a class as final unless you've got
> really good reasons to do it", whereas I fall into the camp of "don't
> _not_ mark a class as final unless you can explain why you want that
> class to be a base class."

That is a tricky question, not answerable in general. You might not know whether the possibility of inheritance is sound at the point of making the decision.

> Inheritance is certainly not a bad thing, when properly used. I just ran
> some stats on a medium-sized project I am helping to maintain, and out
> of 2849 classes roughly a quarter of them (657 to be precise) extend
> another.
> The majority of them are good uses of inheritance:
>
> - creating custom exception classes (although I know that in some cases
> this was overly enthusiastic);

There are two camps, I believe: those who hold that every exceptional condition should have its own type, against Occam's followers arguing that the client is typically not interested in such distinctions. I cannot decide for myself, really.

> And so forth. Bear in mind too that when Bloch and others warn about
> inheritance they are referring to implementation inheritance, and where
> the extensible classes are being used across package boundaries (IOW,
> probably not by the same people who wrote the base classes), and where
> the extensible classes are neither explicitly designed nor commented for
> inheritance. If all of those conditions obtain then you could have problems.

Now I have a clearer picture of what Bloch really wrote. Anyway, implementation inheritance is just one variant of inheritance; it is neither better nor worse than the other types, just trickier to justify.
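The classic demonstration of why implementation inheritance across that kind of boundary is tricky is Bloch's InstrumentedHashSet example from Effective Java, reproduced here in minimal form. The subclass looks correct but silently depends on a self-use detail of the superclass: HashSet inherits addAll() from AbstractCollection, which calls add() internally.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;

// Bloch's implementation-inheritance trap: the subclass depends on
// whether HashSet.addAll() calls add() internally (it does).
class InstrumentedHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c);   // internally delegates to our overridden add()!
    }

    public int getAddCount() { return addCount; }
}

public class InheritanceTrap {
    public static void main(String[] args) {
        InstrumentedHashSet<String> s = new InstrumentedHashSet<>();
        s.addAll(Arrays.asList("a", "b", "c"));
        // Expected 3, but prints 6: each element is counted once in
        // addAll() and again when super.addAll() calls add().
        System.out.println(s.getAddCount());
    }
}
```

The failure needs all of Arved's conditions at once: implementation inheritance, a base class in someone else's package, and no documented self-use contract.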
From: Tom Anderson on 12 Oct 2009 13:15
On Mon, 12 Oct 2009, Rzeźnik wrote:
> On 12 Paź, 10:52, "Mike Schilling" <mscottschill...(a)hotmail.com> wrote:
>> Rzeźnik wrote:
>>> On 12 Paź, 07:02, "Mike Schilling" <mscottschill...(a)hotmail.com> wrote:
>>>> markspace wrote:
>>>>> Mike Schilling wrote:
>>>>>> Lionel van den Berg wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> Just wondering what arguments there are out there for making
>>>>>>> methods and classes final when they are not expected to be
>>>>>>> overridden/extended? Typically I would make them final, but when
>>>>>>> I'm doing code reviews I don't raise this as an issue because I
>>>>>>> see it is relatively trivial.
>>>>>>
>>>>>> Some classes have not been designed to be extensible (either as a
>>>>>> deliberate choice or because the time wasn't taken to make
>>>>>> extensibility work correctly.)
>>>>>
>>>>> This is the one I would emphasize. "Either design for inheritance
>>>>> or prevent it." Effective Java, I believe, by Joshua Bloch.
>>>>
>>>> There's a rule of thumb I was taught long ago that one shouldn't
>>>> derive one concrete class from another. I've found it to be
>>>> excellent advice. I can't explain particularly well why doing so is
>>>> a bad idea in general, but whenever I've been tempted to break the
>>>> rule, I've found that creating an abstract superclass (or a
>>>> hierarchy of such superclasses) from which all concrete classes are
>>>> derived has solved problems the concrete-derived-from-concrete
>>>> design created. I don't think it's far wrong to say:
>>>>
>>>> Declare all concrete classes as final.
>>>
>>> ... if you are ominous
>>>
>>> If you are ominous then this idea is great. I'd bet you are not.
>>
>> I can be pretty threatening at times. Or do you mean "omniscient"?
>
> You are omniscient :-) I meant 'omniscient' :-)

I suspected this, but am disappointed. I would love 'ominous' to become a technical software engineering term.

public final void append(ominous String s) ...
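Mike Schilling's rule of thumb quoted above can be sketched in a few lines; Shape, Circle, and Square are invented names for illustration. Shared behaviour lives only in the abstract layer, and every concrete class is a final leaf, so no concrete class ever derives from another concrete class.

```java
// Hypothetical sketch of "derive concrete classes only from abstract
// superclasses, and declare the concrete ones final". Names invented.
abstract class Shape {
    abstract double area();               // the variable part
    // Shared behaviour belongs in the abstract layer.
    final String describe() { return getClass().getSimpleName() + ": " + area(); }
}

final class Circle extends Shape {        // concrete => final leaf
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

final class Square extends Shape {        // concrete => final leaf
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class HierarchyDemo {
    public static void main(String[] args) {
        System.out.println(new Square(3).describe());  // Square: 9.0
    }
}
```

If Square later needs a specialised variant, the rule says to introduce a new abstract layer between Shape and the two concrete leaves rather than extend Square itself.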
tom

--
Science of a sufficiently advanced form is indistinguishable from magic