From: Nilone on 18 Apr 2010 04:35

On Apr 17, 5:13 pm, "Dmitry A. Kazakov" <mail...(a)dmitry-kazakov.de> wrote:
> On Sat, 17 Apr 2010 05:18:12 -0700 (PDT), Nilone wrote:
> > I see tuples (and arrays, and relations) as derived types, not
> > container types.
>
> I don't see why a container type cannot be a derived type.

I don't have a problem with container types being derived types, only with
tuples, arrays and relations being considered container types.

> > Using your terminology as best I can, a variable
> > declared as a tuple(x : int, y : int) would be an object that stores
> > and returns values from the set ZxZ, not an object containing two
> > named int variables.
>
> I don't see any contradiction. The value of a tuple is a Cartesian product
> of values, e.g. (1, 2). The object having that value contains two other
> objects which values are 1 and 2. Where is a problem?

The problem isn't with the values, but with the container. A sequence of
things isn't the same as a sequence of things inside a container. It's the
algebraic vs coalgebraic difference again.

> Containment is an
> abstraction to getter and setter. You can obtain objects from a container,
> the way the compiler implements that is its business.

A container is fine when I want a container. When I want a sequence of
objects, a container is an additional level of indirection which can't add
any value.

> Consider an array of
> Boolean packed into a register. Does it bother you that X[3] physically
> does not exist in the memory?

Well, I can't subclass a boolean, so covariance isn't going to be a
problem, right? You seem to associate arrays with physical layout in
memory. I don't see that the two are necessarily related. An array is a
logical arrangement of things. The compiler can implement it how it wants
to.

> > > It could be difficult if tuples hold ADTs. There are
> > > certain problems with that. The most difficult ones are type relations like
> > > between an array and its index, matrix and its plane, table and the result
> > > set etc.
>
> > I don't see those problems yet. Would you mind pointing them out, or
> > if you have any references that I could read, I would appreciate it.
>
> Let you have a tuple type as above: int x int. You also have some cloud of
> types around it:
>
> 1. Table, a container of tuples
> 2. Result sets: ordered containers of tuples, of ints (from the column x),
> of ints (from the column y)
> 3. A plethora of types coming from operations involving several tables
>
> All these types are related to each other and you need methods declared on
> the table that use combinations of these types. Do you want to explicitly
> declare all that each time you create a new tuple type?

If I say I want a variable for a relation over two ints, the rest is
implied. I don't, however, think of layers of containers, instead I think
of one variable containing a complex value described by the type
RELATION (X INT, Y INT), and generic methods that can derive relations,
tuples and ints from that value.

> But that is not all. Let you derive positive from int. What happens with
> the table? How the table, result sets of positive is related to the table
> of ints? Can you reuse operations of the latter?

If the positive type is a subset of the int type, then positive is already
included in the relation and it applies without any modification. If you
want to create a new relation over just positive values, that can be done
too.

Nilone
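A minimal Java sketch of the distinction Nilone is drawing (the names Pair
and TupleAsValue are illustrative, not anything from the thread): a tuple
type read as a value drawn from ZxZ, where equality is by content and no
component variables with their own identity or containment are involved.

    // A tuple type modelled as a value from Z x Z, not as a box holding two
    // int variables.  (Java records are one convenient way to get value
    // semantics; any immutable struct-like type would do.)
    record Pair(int x, int y) {}

    public class TupleAsValue {
        public static void main(String[] args) {
            Pair a = new Pair(1, 2);
            Pair b = new Pair(1, 2);
            // Two expressions denoting the same element of Z x Z compare equal;
            // no contained objects take part in the comparison.
            System.out.println(a.equals(b));      // true
            // "Changing" a component means denoting a different value,
            // not mutating a variable stored inside a container.
            Pair c = new Pair(a.x(), 5);
            System.out.println(c);                // Pair[x=1, y=5]
        }
    }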
From: Dmitry A. Kazakov on 18 Apr 2010 04:57

On Sun, 18 Apr 2010 01:35:26 -0700 (PDT), Nilone wrote:
> On Apr 17, 5:13 pm, "Dmitry A. Kazakov" <mail...(a)dmitry-kazakov.de>
> wrote:
>> On Sat, 17 Apr 2010 05:18:12 -0700 (PDT), Nilone wrote:
>>> I see tuples (and arrays, and relations) as derived types, not
>>> container types.
>>
>> I don't see why a container type cannot be a derived type.
>
> I don't have a problem with container types being derived types, only
> with tuples, arrays and relations being considered container types.

Tuples, arrays and records are classic examples of container types.

>>> Using your terminology as best I can, a variable
>>> declared as a tuple(x : int, y : int) would be an object that stores
>>> and returns values from the set ZxZ, not an object containing two
>>> named int variables.
>>
>> I don't see any contradiction. The value of a tuple is a Cartesian product
>> of values, e.g. (1, 2). The object having that value contains two other
>> objects which values are 1 and 2. Where is a problem?
>
> The problem isn't with the values, but with the container. A sequence
> of things isn't the same as a sequence of things inside a container.

A sequence is another word for an ordered container.

> It's the algebraic vs coalgebraic difference again.

I see no reason to promote such a difference. It does not exist for me.

>> Containment is an
>> abstraction to getter and setter. You can obtain objects from a container,
>> the way the compiler implements that is its business.
>
> A container is fine when I want a container. When I want a sequence
> of objects, a container is an additional level of indirection which
> can't add any value.

It seems that you consider containers as having referential semantics.
These are not equivalent.

>> Consider an array of
>> Boolean packed into a register. Does it bother you that X[3] physically
>> does not exist in the memory?
>
> Well, I can't subclass a boolean, so covariance isn't going to be a
> problem, right?

Why can't you? Why is Boolean better or worse than Employee? Why shouldn't
I derive Belnap_Logical from Boolean?

> You seem to associate arrays with physical layout in memory.

I do not.

> I don't
> see that the two are necessarily related. An array is a logical
> arrangement of things. The compiler can implement it how it wants to.

That is my point, not yours! (:-))
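A small sketch of the packed-array point, assuming a hand-rolled Java class
(PackedBools is an illustrative name; the standard java.util.BitSet offers
the same abstraction): the whole Boolean array lives in a single word, so
element 3 has no address of its own, yet indexed read and update behave
exactly like an array.

    // A Boolean array packed into one machine word.  "X[3]" exists only
    // logically: there is no byte or bit object to take the address of,
    // but the getter/setter abstraction of containment is preserved.
    final class PackedBools {
        private long bits;                        // the entire "array", 64 elements

        boolean get(int i)  { return ((bits >>> i) & 1L) != 0; }

        void set(int i, boolean value) {
            if (value) bits |=  (1L << i);
            else       bits &= ~(1L << i);
        }
    }

    public class PackedDemo {
        public static void main(String[] args) {
            PackedBools x = new PackedBools();
            x.set(3, true);
            System.out.println(x.get(3));         // true
        }
    }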
>>>> It could be difficult if tuples hold ADTs. There are
>>>> certain problems with that. The most difficult ones are type relations like
>>>> between an array and its index, matrix and its plane, table and the result
>>>> set etc.
>>
>>> I don't see those problems yet. Would you mind pointing them out, or
>>> if you have any references that I could read, I would appreciate it.
>>
>> Let you have a tuple type as above: int x int. You also have some cloud of
>> types around it:
>>
>> 1. Table, a container of tuples
>> 2. Result sets: ordered containers of tuples, of ints (from the column x),
>> of ints (from the column y)
>> 3. A plethora of types coming from operations involving several tables
>>
>> All these types are related to each other and you need methods declared on
>> the table that use combinations of these types. Do you want to explicitly
>> declare all that each time you create a new tuple type?
>
> If I say I want a variable for a relation over two ints, the rest is
> implied. I don't, however, think of layers of containers, instead I think
> of one variable containing a complex value described by the type
> RELATION (X INT, Y INT), and generic methods that can derive relations,
> tuples and ints from that value.

What is the profile of the operation SELECT, INSERT?

>> But that is not all. Let you derive positive from int. What happens with
>> the table? How the table, result sets of positive is related to the table
>> of ints? Can you reuse operations of the latter?
>
> If the positive type is a subset of the int type,

Types are not sets. The domain set of positive might be a subset of int.

> then positive is
> already included in the relation and it applies without any
> modification.

Really? Does it mean that I can insert (-1,-1) into the table?

Note that positive is not an LSP subtype of int. But even if it were, the
problem is when and how a container type (or any other type built by a
type-algebraic operation) based on a subtype is a subtype of the
corresponding container type based on the supertype. This is not a trivial
question.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
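The closing question here, when a container of a subtype may be treated as
a container of the supertype, is exactly where mainstream languages
visibly diverge. A Java sketch (Int and Positive are illustrative
stand-ins, not the thread's types; Positive narrows the legal values, so
it is deliberately not an LSP subtype): covariant arrays let the bad value
reach the "table of positives" and fail only at run time, while generic
containers refuse the subtyping outright.

    import java.util.ArrayList;
    import java.util.List;

    class Int {
        final int value;
        Int(int value) { this.value = value; }
    }

    class Positive extends Int {
        Positive(int value) {
            super(value);
            if (value <= 0) throw new IllegalArgumentException("not positive: " + value);
        }
    }

    public class ContainerVariance {
        public static void main(String[] args) {
            // Java arrays treat Positive[] as a subtype of Int[] (covariance)...
            Positive[] ps = { new Positive(1), new Positive(2) };
            Int[] xs = ps;
            try {
                xs[0] = new Int(-1);              // ...so -1 heads for the table of positives,
            } catch (ArrayStoreException e) {     //    caught only at run time
                System.out.println("caught: " + e);
            }

            // Generic containers are invariant: the subtyping is rejected up front.
            List<Positive> lp = new ArrayList<>();
            // List<Int> li = lp;                 // does not compile
            List<? extends Int> readers = lp;     // read-only covariance is the safe fragment
            System.out.println(readers.size());
        }
    }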
From: S Perryman on 18 Apr 2010 06:45

Dmitry A. Kazakov wrote:
> On Fri, 16 Apr 2010 19:14:16 +0100, S Perryman wrote:

>> All I am saying is that IMHO a type checking system has to have as a
>> minimum (in order to be useful for developers in general), the
>> intellectual/proving equivalence of kids whose maths ability is at a
>> particular level.

> Are you sure that kids are Turing complete? (:-))

:-)

What I contend is that the maths skills I use to construct and deduce type
substitutability (utility and conflicts) in all the systems I develop are
of the level I had pre-university.

If a type checker has the equivalent ability, then IMHO that would be
useful enough to allow much better type substitutability regimes than just
basic structural equivalence etc.

>> You aren't discarding modularity.
>> Merely ensuring that multiple dispatch cannot fail for M (whenever the
>> type system detects the possibility) for any given end program using M.
>> This requires system-wide analysis (the end programs in C, Ada etc -
>> where S is static). You need some "closed world assumptions" in order to
>> be certain that all combinations of input/output types will map to at
>> least one impl of M.

> Yes, but I want to be able to require all these combinations to be defined
> at some declaration point of a new type. If some types were invisible at
> this point, you should not be able to bring them in any context together
> with the declared type. E.g.
>
>    procedure Print (P : in out Printer; S : Shape);
>
>    context A: see shape Ellipse, printers Epson, Cannon, HP
>       derive Circle from Ellipse
>       implemented Print for Epson, Cannon, HP and Circle
>
>    context B: see shape Ellipse
>       derive Brother from Printer
>       implemented Print for Brother and Ellipse
>
>    context C: want to use Circle and Brother - type error!
>
> The problem is that this is too constraining. I need a way to add Print for
> Circle and Brother later, where both types are frozen, and use it in C.

>> For other envs (the dynamic loading / VM of Java etc), you may not be
>> able to do so.

> Exactly. This is what I meant. (Modularity was probably not the best word)
> I have a problem with type checks postponed that late.

Indeed. Whenever the checks are done, they must be done before an
executable program is allowed to be produced.

Ideally at what is usually the "compile" phase, but if more practical to do
it at the "link" phase, so be it.

Regards,
Steven Perryman
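A rough Java sketch of the modularity problem the two posts are circling
(the class names and the string-keyed dispatch table are illustrative,
nobody's actual proposal): each context registers the Print combinations
it knows about, and the Circle and Brother combination of context C
surfaces only as a run-time failure unless some whole-program or link-time
check finds the hole earlier.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    class Shape {}
    class Ellipse extends Shape {}
    class Circle extends Ellipse {}               // introduced by context A
    class Printer {}
    class Epson extends Printer {}
    class Brother extends Printer {}              // introduced by context B

    final class Print {
        // The multimethod as an open table; a real multimethod system would
        // also search supertypes, but exact matching keeps the sketch short.
        private static final Map<String, BiConsumer<Printer, Shape>> impls = new HashMap<>();

        static void define(Class<? extends Printer> p, Class<? extends Shape> s,
                           BiConsumer<Printer, Shape> body) {
            impls.put(p.getName() + "/" + s.getName(), body);
        }

        static void print(Printer p, Shape s) {
            BiConsumer<Printer, Shape> body =
                impls.get(p.getClass().getName() + "/" + s.getClass().getName());
            if (body == null)                     // the hole shows up only at run time
                throw new IllegalStateException("no Print for "
                    + p.getClass().getSimpleName() + " x " + s.getClass().getSimpleName());
            body.accept(p, s);
        }
    }

    public class Contexts {
        public static void main(String[] args) {
            // context A: knows Epson (and friends) and Circle
            Print.define(Epson.class, Circle.class,
                         (p, s) -> System.out.println("Epson prints a Circle"));
            // context B: knows Brother and Ellipse
            Print.define(Brother.class, Ellipse.class,
                         (p, s) -> System.out.println("Brother prints an Ellipse"));

            Print.print(new Epson(), new Circle());    // ok
            Print.print(new Brother(), new Circle());  // context C: no entry registered
        }
    }

Requiring every combination to be declared at a single point would close
the hole, but, as noted above, it also forbids adding Print for Circle and
Brother later; checking the table for completeness at link time is the
compromise Perryman suggests.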
From: Dmitry A. Kazakov on 18 Apr 2010 10:47

On Sun, 18 Apr 2010 11:45:27 +0100, S Perryman wrote:

> What I contend is that the maths skills I use to construct and
> deduce type substitutability (utility and conflicts) in all the systems I
> develop are of the level I had pre-university.
>
> If a type checker has the equivalent ability, then IMHO that would be
> useful enough to allow much better type substitutability regimes than just
> basic structural equivalence etc.

No doubt, but presently we cannot build a system as "simple" as a bee.
Pre-university kids seem a little more intelligent...

By that I only mean that this is not a good way to estimate the complexity
of the problem. It might be simple, complex or "overly complex" (as when
checking the types would be more complex than designing the program
itself).

> Ideally at what is usually the "compile" phase, but if more practical to do
> it at the "link" phase, so be it.

But that would in effect disable dynamic linking. I don't care much about
memory, it is cheap. The problem is with distributed systems, as we advance
more and more in that direction. So, to me linking/binding is closer to run
time than to compile time.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Dmitry A. Kazakov on 18 Apr 2010 10:49
On 18 Apr 2010 11:39:54 GMT, Stefan Ram wrote:

> Kids and cats are interactive systems, which
> might be /more/ than Turing complete. See:
>
> http://www.engr.uconn.edu/~dqg/papers/myth.pdf

Nonsense. A Turing-complete system processing an incomputable input (real
time readings, random series etc) is still what it is. A larger system
enclosing the input source in itself is a different matter. But this is a
typical subject change.

Whether [higher?] living organisms are incomputable in a proper sense, and
not merely by inclusion of the Universe, nobody knows. But if they are,
that is not because they are used interactively. An interactively applied
spade is still a spade.

It's amazing what kind of rubbish is published nowadays. (Was it published
in a peer-reviewed magazine?) When, eons ago, ELIZA
(http://en.wikipedia.org/wiki/ELIZA) was developed, nobody would have
called it Turing-incomplete.

The example with car driving is just pathetic. I have a similar one: 1+1 is
incomputable because in order to use the CPU you should take care of all
the myriads of electrons bubbling in there.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de