From: Walter Bright on 5 Jul 2010 23:39

Mathias Gaunard wrote:
> On Jul 5, 6:07 pm, Walter Bright <newshou...(a)digitalmars.com> wrote:
>> Now another fellow on your team adds a global state dependency somewhere
>> deep in that call hierarchy. How are you to know? Suppose someone else
>> derives and overrides my_pure_function with an impure one? How are you
>> to stop that?
>>
>> What if, between calls, someone modifies something s is transitively
>> pointing to?
>
> If you can't enforce the project team to follow the coding standards
> for that given project, and to read the comments that say "this
> function is pure", then I think you already have some big problems.

The problem with reliance on coding standards is:

1. People are human. Experts are human. They take shortcuts. They make
mistakes. If they didn't, we could dispense with all compiler error
checking. I don't think anyone is ready to get rid of type checking in C++
and rely on coding standards instead.

2. It's nearly impossible to verify whether a particular function is pure
through manual code reviews. A coding standard that cannot be verified is
programming based on hope and prayer. When you've got millions of lines of
code, how practical is it to trust in hope and prayer?

3. If coding standards were reliable, nobody would need tools like
Coverity, which are thriving.

4. I'm one of those guys who likes to work on his own car. But when it's
night, cold, and raining, I'm dressed in good clothes, and I'm far from
home, I want the freakin' car to work when I put the key in. And I want
the compiler to check purity for me. It's a job well suited to automation.
Would you rather push a button and get an instant *guaranteed* 100% check,
or direct your (very expensive) staff to spend many boring, tedious hours
doing a code review with only 80% confidence they got it right?

> Saying a function is pure is actually quite similar to what we do in
> contract programming. People seem to do fine with contracts that are
> simply documented, without any checking done by the compiler, even if
> compiler help would certainly help debugging.

I see absolutely no advantage to relying on hope and prayer rather than
guaranteed checking by the compiler.

>> Because of this, I suspect that trying to use purity and immutability
>> purely by convention is doomed to failure.
>>
>> (And also, since the compiler cannot know about its purity and
>> immutability, it also cannot take any advantage of that information to
>> produce better code. The compiler cannot even cache a pointer-to-const
>> value.)
>
> Some compilers (I don't know about Digital Mars, but I hope it does!)
> allow you to tag a function as pure using attributes. If they can deduce
> with static analysis that the function isn't really pure, they could
> also emit a warning.

Non-standard extensions are not C++, and this thread is about C++0x for
FP. My experience with adding non-standard extensions is, quite simply,
that nobody uses them. And I mean nobody, not even the person who told me
"it would be nice if you added an extension to do X" and convinced me to
do it.

Adding non-standard extensions to a C++ compiler is a complete waste of
time. People won't use them because C++ is standardized and they want
their code to be portable. It's a sensible sentiment.

I did add contract programming to C++. Nobody uses it. In D, it's used
often, because people expect it to be available in any D compiler.

I did not add a function purity attribute to the Digital Mars C++
compiler. It wouldn't work anyway: for a function to be usefully pure, its
arguments all have to be transitively immutable, and there's no way to
specify that in C++ without redesigning most of the type system.

---
Walter Bright
free C, C++, and D programming language compilers (Javascript too!)
http://www.digitalmars.com

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
From: Walter Bright on 5 Jul 2010 23:45

nmm1(a)cam.ac.uk wrote:
> I am afraid not. That is true for only some architectures and
> implementations, and is one of the great fallacies of the whole
> IEEE 754 approach. Even a 'perfect' IEEE 754 implementation
> would not be predictable, because it is not required to be.

Can you elucidate where the IEEE 754 spec allows unpredictability?

> Once you get away from the modern mainstream, it is fairly common
> for a compiler to choose algorithms or hardware [*] based on dynamic
> considerations. You then get unpredictable results. Sorry, but
> that really does happen, it's allowed by almost all languages
> (including C++), and it isn't going to change.
>
> The point is that it is usually the software equivalent of branch
> prediction, which leads to unpredictable times but usually gives
> enough of a performance gain that every architecture does it. It
> is less common in compilers, but can give considerable speedups,
> and I have seen it even at the operating-system and hardware
> levels.
>
> [*] Consider using a GPU only if another core isn't using it, or
> switching between equivalent algorithms based on dynamic heuristics.
> Parallelism brings this in, redoubled in spades, as reductions are
> always unpredictable unless ridiculous contortions are taken to
> avoid that.

I understand that the FP unit may use higher precision than specified by
the programmer, but what I was seeing was *lower* precision. For example,
an 80-bit transcendental function is broken if it returns only 64 bits of
precision.

Other lowered-precision sloppiness I've seen came from not implementing
the guard and sticky bits correctly. Other problems are failures to deal
properly with NaN, infinity, and overflow arguments.

I don't believe such carelessness is allowed by IEEE 754, and even if it
were, it would still be unacceptable in a professional implementation.

(Just to see where I'm coming from: I used to do numerical analysis for
Boeing airplane designs. I cared a lot about getting correct answers. The
Fortran compilers I used never let me down. Thirty years later, C and C++
compilers still haven't reached that level, and people wonder why Fortran
is still preferred for numerical work.)

Here's a current example of what I'm talking about:

http://www.reddit.com/r/programming/comments/cb8qv/incorrectly_rounded_conversions_in_gcc_and_glibc/
From: Dragan Milenkovic on 5 Jul 2010 23:41

Andre Kaufmann wrote:
> Joshua Maurice wrote:
>> On Jul 4, 10:41 am, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
>>> On 04.07.2010 00:50, Dragan Milenkovic wrote:
>>>> However, what would be really nice is something to make
>>>> the pimpl idiom obsolete. :-D
>>> Yep, a module concept is IMHO badly needed in C++. Unfortunately it
>>> has been removed from the first C++0x proposals.
>>
>> Can you point at any proposals or ideas? I'm not sure how "modules"
>> would remove the need for pimpl. pimpl is an idiom whose goal is to
>
> Why is pimpl needed in C++?
> As you already correctly stated: to decouple source code and to increase
> compilation speed.
>
> But why can't that be done by the compiler automatically?
> Why do we have to use such a stone-age relic of code decoupling
> as header files and pimpl in C++?
>
> Before I try to explain that, let's have a look at how compilation
> would be done using C++ modules.
>
> Let's assume we have the following class in a single module file:
>
> class Test
> {
> public:
>     void foo() {}
> };
>
> The C++ compiler has all the required information to generate
> precompiled code and a precompiled header file when the module is
> imported for the first time, shortly after compilation has started.
>
> E.g. in another module:
>
> #import "Test"
>
> If the same module is used anywhere else in the same project, the
> compiler can (better said, could) check whether it has already compiled
> the code and just use the precompiled header file (or code for
> inlining).
>
> No need to reparse the whole module again. Why should the compiler do
> that anyway? It has already compiled the code!
>
> Now back to C++:
>
> Why can't a C++0x compiler just do the same?
> The simple answer is: because of the dumb preprocessor.
> Every translation unit can use different macros, and therefore the
> preprocessor can emit different code.
>
> The result is:
>
> a) The compiler has to compile the same code over and over again.
> b) The C++ developer, to prevent too much code from being recompiled,
>    has to decouple the code manually. But as soon as templates are
>    involved, you are lost in C++ and can't decouple appropriately
>    anymore.

GCC features precompiled headers for C++. I believe it solves all the
mentioned problems. But what _I_ want is to remove the private: part from
the header. :-D

Seriously now, a smart-pointer field can be used instead of a value for
the same effect, but having a value is a bit cleaner (although it brings
more headers into the game).

IMHO, a C++ developer does not decouple code manually to prevent
recompilation, but for the purposes of modularity, maintenance, good
design, etc. Templates do make a nice mess, however.

Modules should interact by means of interfaces. I don't see any problem
with this and nothing to improve. In your case it would be:

class Test
{
public:
    virtual ~Test() = 0;
    virtual void foo() = 0;
};

This is the only thing that the user of the Test "module" will have to
include. But if the module is made out of concrete classes and there are
too many headers involved:

http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html

One question: how do D modules get along with templates?

--
Dragan
From: Edward Rosten on 5 Jul 2010 23:43

On Jul 4, 9:51 pm, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
>> It's an entirely different model.
>
> Many languages offer that. E.g. C#.

That would be the .NET platform rather than C# per se. But not one of
those languages/platforms has anything like the portability of C++.
Consider trying to make it work on a microcontroller with executable code
stored in NOR flash...

> And the model is not different. Why can't I simply take C++ code like
>
> double value = sin(x)/x;
>
> (the formula has been entered in an edit box)
>
> compile it in the background and execute it instead of interpreting it?
> Wouldn't that be faster than just interpreting the formula?
> I can do exactly that in C#.
>
> Or take a complex regular expression. Why can't I simply compile it to
> code? In C# I can do that.

It would be a useful feature, to be honest. Then again, you can already do
this (albeit in a non-portable way) using the compiler, shared objects,
and dlopen. I think there are even libraries that package up this feature,
making it portable to all platforms that can run the compiler and have an
appropriate CPU architecture and operating system. See also libjit.

Basically, given how portable C++ is, there is *no* *possible* way of
doing what you want without having different C++ profiles in the standard.
Whether or not that is a good thing is an exercise for the reader.

>>> - Can I interchange / directly call code written in other languages
>>>   without writing C wrappers?
>> Yes.
>
> Ok - call Java code from C++ code which then calls C# code?
> Otherwise there wouldn't be a need for wrapper generators like SWIG.
>
>>> - Can I mix libraries written with different C++ compilers?
>> Can you do this in ANY language?
>
> C# for example.

Again, I think you mean the .NET platform, not C# per se. I believe the
.NETtified C++ can also do this. I may be wrong.

Also, I'm slightly skeptical of the entire concept, given some experience
of it. Languages tend to be different because their views of the world are
different. The impedance mismatch between languages can make this tricky,
especially if one wants an idiomatic representation of data in the
respective languages.

For instance, if one wants to represent (forgive the notation abuse) a
std::vector<struct {int x, int y;}> in Python, the most appropriate type
would quite probably be an Nx2 numpy array, not a Python list of Python
classes, or (even worse) a wrapped version of a std::vector which forwards
on the method calls and returns wrapped x,y structs. It is somewhat more
straightforward if one just wants black-box classes with manipulator
methods.

>>> - Can I mix C++ code libraries compiled for different platforms?
>> Can you do this in ANY language?
>
> C# for example.

No. You can only mix code compiled for the .NET platform. Much like you
can mix binary code compiled for your platform's ABI (or C++ ABI).

>>> - Can I write portable GUIs in C++ (e.g. with Qt, but that isn't
>>>   standard)?
>> Can you do this in ANY language?
>
> Better than in C++.

I'm not vastly experienced in the matter, but a few people I know really
like Qt. PyQt as it happens, but I gather that the C++ interface is pretty
much as nice.

>>> - I want to write binary data, e.g. a set of integers.
>>>   How can I do that so that every platform and C++ compiler can read
>>>   it? (no fixed-size types with defined binary layout)
>> Can you do this in ANY language (except, maybe, Java?). But then, how
>> important is this?
>
> If I use a text format to write and read data, it isn't important.
> But if I pass a structure to a library written in another language, I
> have to care about whether the integer is 16 or 32 bits wide and how the
> compiler aligns the structure members.

Indeed, this all comes down to sharing an ABI of some sort. C++ itself
does not specify an ABI, but many platforms do.

>>> - Can I initialize class member objects / values directly where
>>>   they are defined?
>> Who cares? That's what constructors are for.
>
> class foo
> {
>
> Do you need a more detailed explanation of the advantages?
> ...
> Yes, I want to call the constructor of the value where I define it
> (RAII), not have to write it multiple times in different constructors.
> Though C++0x addresses this problem somewhat with delegating
> constructors.

Yep. I agree. It is a pain, and also pretty much solved.

-Ed
From: ThosRTanner on 5 Jul 2010 23:42

On Jul 6, 2:50 am, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
<snip>
> And with a two-pass compiler (like C#'s) you don't even have to include
> anything. The compiler just does what it should do: compile the code.
>
> (Disclaimer: I once thought header files and the preprocessor were a
> good design decision too. Now I think just the opposite.)

I think this would be a bad thing. With a two-pass compiler, rather than
separate headers (or at least interfaces) and implementation, if you alter
the implementation, you need to recompile the clients. That's really bad
(IMHO, YMMV, etc.).

One of the problems with C++ is that you don't really implement an
interface. Instead you provide a header file which is part implementation,
part private data, and part interface. This has the mildly unfortunate
side effect that your clients can be dependent on your private data. pimpl
alleviates that at the header level, but not at the implementation level.