From: Nathan Froyd on 15 Apr 2010 22:29

On Apr 15, 5:05 pm, Juanjo <juanjose.garciarip...(a)googlemail.com> wrote:
> On Apr 15, 10:06 pm, Nathan Froyd <froy...(a)gmail.com> wrote:
> > > * Restrict the set of operations that ASDF supports to a few: compile, load & test
> >
> > Please don't do this. There are very good reasons to support more than three operations on a system; I've written at least two custom operations for my own personal use (building tar files and autogenerating documentation); I'm sure you can think of more.
>
> Answer this: assume that ASDF becomes a library and can provide you with an annotated graph with all the components of a system, topologically sorted, each one further annotated with the files it produces.
>
> Do you really need then the current system that ASDF has, or wouldn't it suffice to just traverse the list, a la MAPC, with a function that creates a tar file?
>
> The problem is not extensibility, but that ASDF right now is so confusing that it does not allow people to realize there are other ways to do things than with methods. Simplifying the set of operations would allow creating the previous graph in a simpler form: one graph for LOAD and one graph for COMPILE, the former being the one you actually need because it lists what a running image needs -- compiled files, documentation, etc.

I don't understand what creating the graph and then traversing it buys you versus writing methods that are called by generic traversal code. Maybe it's simpler to understand. Maybe. I personally find ASDF straightforward... it's also possible that upstream ASDF is significantly more complicated than the ASDF distributed by SBCL, though.

The whole point here is that if you want to annotate the graph, a given annotation depends on what operation you're doing. In C terms, `make doc' is different from `make program' or `make library'. Writing the Meant-To-Rule-Them-All system definition facility and saying, "oh, well, if you want to do anything more than LOAD or COMPILE or test, you can grovel through a list of all the files in the system"... that's not very useful. Perhaps you mean to have a way of providing that sort of extensibility; it's not clear from your descriptions thus far that you're aiming for that level of flexibility, though.

I do concur that ASDF isn't perfect. The relationship between ASDF and ASDF-Install needs some help. Writing the main system and the test system separately is a little strange, but I think it'd be a pain to write all the methods that would unify them in a single system. Extensions are a pain to write, and dependencies likewise. I could enumerate problems for a while. But ASDF seems to be Good Enough for the moment.

-Nathan
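For concreteness, the sort of custom operation Nathan mentions (gathering a system's files for a tar archive) looks roughly like the sketch below. Only PERFORM, OPERATE, COMPONENT-PATHNAME and the component classes are real ASDF names; TAR-OP, *TAR-FILES*, and the system name in the usage comment are made up, and details such as dependency declarations, OPERATION-DONE-P handling and actually invoking tar are omitted.

    ;; Sketch of a custom operation in the ASDF style Nathan describes.
    ;; TAR-OP and *TAR-FILES* are illustrative names only.
    (defclass tar-op (asdf:operation) ()
      (:documentation "Collect a system's source files for archiving."))

    (defvar *tar-files* '()
      "Pathnames gathered while a TAR-OP runs.")

    ;; Ignore components we don't care about (modules, static files, ...).
    (defmethod asdf:perform ((op tar-op) (c asdf:component))
      (declare (ignorable op c))
      nil)

    ;; For Lisp source files, remember the pathname; a real version would
    ;; probably shell out to tar once the whole system has been walked.
    (defmethod asdf:perform ((op tar-op) (c asdf:cl-source-file))
      (declare (ignorable op))
      (push (asdf:component-pathname c) *tar-files*))

    ;; Hypothetical usage, for some system :my-system:
    ;;   (let ((*tar-files* '()))
    ;;     (asdf:operate 'tar-op :my-system)
    ;;     *tar-files*)

Whether this is nicer than traversing a precomputed plan, as Juanjo suggests, is exactly the question being argued in this subthread.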
From: RPG on 15 Apr 2010 23:17

On Apr 13, 7:14 pm, p...(a)informatimago.com (Pascal J. Bourguignon) wrote:
> Juanjo <juanjose.garciarip...(a)googlemail.com> writes:
> > What I would like to learn is what people think they have learnt that ASDF does wrong, things it gets right, and to what extent the Common Lisp community is willing to accept any progress along one or another direction.
>
> My use of ASDF is rather limited.
>
> - One thing I find wrong as a developer is that I would have to write the dependencies. I don't. I write programs to sort out the dependencies automatically and to generate the ASD files automatically.

If the ASD file is a byproduct, is it really needed? Should every user of every system run this dependency generation him/herself? If not, perhaps the tool is the right thing, but it's also correct to distribute the ASDF file. I'm not convinced that we are yet at the stage of groveling dependencies automatically, though, especially not for systems that interact with external functions. But I am certainly looking forward to getting this new tool and using it!

> - As a user, ASDF is usable. But obviously, it should provide a simpler API than OPERATE for users; everyone is writing his own asdf-load function...

This is being added in ASDF 2. I agree, though --- the original ASDF seemed to take pride in forcing users to engage with its internals at what seems an unnecessary level of detail. Somewhat like making a user crank over the car engine, because it's important to know that cranking is part of what makes the car start. These issues should largely go away in ASDF 2.
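As a rough illustration of what "generating the ASD file automatically" can mean in the simple case, the sketch below assumes one DEFPACKAGE per file and intra-system dependencies that mirror package :USE lists. All names here are invented for illustration, and real systems (several packages per file, foreign code, read-time conditionals) need considerably more care.

    ;; Sketch of a naive .asd generator; not an existing tool.
    (defun file-package-info (pathname)
      "Return (package-name . used-package-names) from PATHNAME's first form,
    or NIL if that form is not a DEFPACKAGE."
      (with-open-file (in pathname)
        (let ((form (read in nil nil)))
          (when (and (consp form)
                     (symbolp (first form))
                     (string-equal (symbol-name (first form)) "DEFPACKAGE"))
            (cons (second form)
                  (rest (assoc :use (cddr form))))))))

    (defun generate-defsystem (system-name files)
      "Print a DEFSYSTEM form for FILES, inferring :DEPENDS-ON from packages."
      (let ((package->file (make-hash-table :test #'equalp)))
        ;; First pass: which file defines which package?
        (dolist (f files)
          (let ((info (file-package-info f)))
            (when info
              (setf (gethash (string (car info)) package->file) f))))
        ;; Second pass: one :FILE component per file, depending on the files
        ;; that define the packages it :USEs (packages defined elsewhere,
        ;; such as CL itself, are simply skipped).
        (pprint
         `(defsystem ,system-name
            :components
            ,(loop for f in files
                   for info = (file-package-info f)
                   for deps = (loop for used in (rest info)
                                    for dep = (gethash (string used) package->file)
                                    when (and dep (not (equal dep f)))
                                      collect (pathname-name dep))
                   collect `(:file ,(pathname-name f)
                             ,@(when deps `(:depends-on ,deps))))))))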
From: RPG on 15 Apr 2010 23:26

On Apr 15, 3:06 pm, Nathan Froyd <froy...(a)gmail.com> wrote:
> On Apr 13, 6:29 pm, Juanjo <juanjose.garciarip...(a)googlemail.com> wrote:
> > * Restrict the set of operations that ASDF supports to a few: compile, load & test
>
> Please don't do this. There are very good reasons to support more than three operations on a system; I've written at least two custom operations for my own personal use (building tar files and autogenerating documentation); I'm sure you can think of more.

I would like to second this. More and more I find that I write systems that combine Lisp with other tools. It is common for me to write new operations in my ASDF files. It might be reasonable to think harder about what the BUILDING BLOCKS for such new operations and components should be --- I think ASDF got a number of things wrong here --- but not to take this ability away.

....

> > * Replace the routine that drives ASDF, TRAVERSE, with something everybody understands.
>
> Method combination would be a good start here. TRAVERSE was written the way it was because of poor CLOS support from some implementations.

Method combination, with all due respect, is a red herring here. What is needed is not a new software tool, or even a refactoring, but a specification. The component that generates the build/load/operate plan (TRAVERSE) should be specified clearly and (I can almost hear the howling) formally, using the tools of theoretical computer science, especially graph theory. If we can say what TRAVERSE /should do/, which I believe has everything to do with computing a topological ordering on a particular graph, it will be a lot easier to do it, and to check its correctness when it's done.

If someone wants to make a successor for ASDF, let's have an RFP for what that successor /should/ do first, instead of just banging together more code.
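For what it's worth, the graph-theoretic core being pointed at here is small. A specification of the form "TRAVERSE computes a topological ordering of the component graph, or reports a cycle" could be captured in a few lines; the sketch below is only that core, with illustrative names, and a real plan would also have to account for operations, timestamps and forced recompilation.

    ;; Sketch: topological ordering of a dependency graph.
    (defun topological-sort (nodes dependencies-fn)
      "Return NODES ordered so that every node appears after its dependencies.
    DEPENDENCIES-FN maps a node to the list of nodes it depends on."
      (let ((state (make-hash-table :test #'eql))  ; NIL, :VISITING or :DONE
            (order '()))
        (labels ((visit (node)
                   (case (gethash node state)
                     (:done nil)
                     (:visiting (error "Dependency cycle involving ~S" node))
                     (t (setf (gethash node state) :visiting)
                        (mapc #'visit (funcall dependencies-fn node))
                        (setf (gethash node state) :done)
                        (push node order)))))
          (mapc #'visit nodes)
          (nreverse order))))

    ;; Example: file C depends on A and B, B depends on A.
    ;; (topological-sort '(c b a)
    ;;                   (lambda (n) (case n (c '(a b)) (b '(a)) (t '()))))
    ;; => (A B C)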
From: Tim X on 15 Apr 2010 23:30

Juanjo <juanjose.garciaripoll(a)googlemail.com> writes:
> On Apr 15, 10:06 pm, Nathan Froyd <froy...(a)gmail.com> wrote:
>> This is the big sticking point in switching to something else. There are good reasons why side-effectful bits are useful in an .asd file. Custom component definitions and specializing operations on those components are what come immediately to mind.
>
> I argue that this can be done in a "declarative" manner. That would be ok because it would also impose a stricter syntax and we can enforce further requirements.
>
>> > * Restrict the set of operations that ASDF supports to a few: compile, load & test
>>
>> Please don't do this. There are very good reasons to support more than three operations on a system; I've written at least two custom operations for my own personal use (building tar files and autogenerating documentation); I'm sure you can think of more.
>
> Answer this: assume that ASDF becomes a library and can provide you with an annotated graph with all the components of a system, topologically sorted, each one further annotated with the files it produces.
>
> Do you really need then the current system that ASDF has, or wouldn't it suffice to just traverse the list, a la MAPC, with a function that creates a tar file?
>
> The problem is not extensibility, but that ASDF right now is so confusing that it does not allow people to realize there are other ways to do things than with methods. Simplifying the set of operations would allow creating the previous graph in a simpler form: one graph for LOAD and one graph for COMPILE, the former being the one you actually need because it lists what a running image needs -- compiled files, documentation, etc.
>
>> > * Simplify the way components are written, perhaps adding some syntactic sugar.
>> > * Create an API to load and traverse the set of systems and their dependencies.
>>
>> This would be something useful standalone; coaxing a list of files out of ASDF can be an exercise in frustration. Doubly so if you want system-relative names.
>
> This is precisely what I meant and what Jasper so clearly exposed: ASDF is just a hidden database of dependencies. We only need to provide a clean means to expose those dependencies and make them as simple and well specified as possible.
>
>> > * Replace the routine that drives ASDF, TRAVERSE, with something everybody understands.
>>
>> Method combination would be a good start here. TRAVERSE was written the way it was because of poor CLOS support from some implementations.
>
> An even better version would not need methods at all! What makes TRAVERSE so unpredictable is that class dependencies, method specialization, :around, :after, and other apparent niceties can make a developer blow everything up -- to name a few things, ASDF recently changed the way it specializes some methods and it broke ECL's extensions in various and really mysterious ways.
>
> OO is nice, but it is a distraction in this camp.

I agree, and I think this exposes one of the reasons we are in the situation we are in. To some extent, ASDF has been so powerful that individuals have been able to do things which, on one hand, make their lives easier when looking at a system from a single perspective, but once you move out into a wider world of possibilities, things get too complex to manage. I think there is merit in much of what Jasper has pointed out.
The discussions have also shown that there is a broad cross-section of differing views as to what exactly ASDF, or any system definition framework, is meant to do. It is this lack of consensus, combined with the flexibility and loose specification, that has created much of the complexity that causes problems with the existing system. Clearly articulating exactly what the core requirements are and then going back to first principles, as suggested by Jasper, would seem to be a good starting point. Once we have a good base, additional functionality could be added, and would likely be easier to add as it would have more defined constraints to work within. The hard part could be getting consensus on what the core requirements are. My view would be to keep these as simple and basic as possible. A concise and clear definition of dependencies would seem to be the most critical base requirement.

Tim

--
tcross (at) rapttech dot com dot au
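To make the "expose the dependency database" idea quoted above concrete: if a hypothetical SYSTEM-PLAN function returned the sorted, annotated list of components Juanjo describes, operations such as collecting output files or listing tarball sources would reduce to ordinary list traversals. SYSTEM-PLAN, its :OPERATION argument and the plist keys are all assumptions, not existing ASDF API; the stub below exists only to make the sketch self-contained.

    ;; Sketch of the "plan as data" interface discussed above.
    (defun system-plan (system-name &key operation)
      "Stub: pretend SYSTEM-NAME's plan is two files, in dependency order."
      (declare (ignore system-name operation))
      (list (list :component "package" :source-file #p"package.lisp"
                  :output-files (list #p"package.fasl"))
            (list :component "utils" :source-file #p"utils.lisp"
                  :output-files (list #p"utils.fasl"))))

    (defun collect-output-files (system-name)
      "List every output file recorded in SYSTEM-NAME's compile plan."
      (loop for entry in (system-plan system-name :operation :compile)
            append (getf entry :output-files)))

    (defun tarball-sources (system-name)
      "Return the source files, in order, that a tar operation would archive.
    A real version would hand this list to an external tar."
      (loop for entry in (system-plan system-name :operation :load)
            when (getf entry :source-file)
              collect (namestring (getf entry :source-file))))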
From: Tim X on 16 Apr 2010 00:15
m_mommer(a)yahoo.com (Mario S. Mommer) writes:
> Tim X <timx(a)nospam.dev.null> writes:
>> m_mommer(a)yahoo.com (Mario S. Mommer) writes:
>>> Juanjo <juanjose.garciaripoll(a)googlemail.com> writes:
>>>> On Apr 14, 6:14 am, Tim X <t...(a)nospam.dev.null> wrote:
>>>>> 2. I would love a system similar to CPAN. I'd love to be able to just issue the command CLAN install <some-lib> and have it identify any dependencies not available locally, download all necessary sources, build them, run tests and, if all goes well, install everything.
>>>
>>> My experience with CPAN has always been negative. It just did not work that way, and in fact for me it did not work.
>>
>> That is interesting. In the 10+ years that I have used it, I've only had a couple of times where it has let me down, and without exception that was due to bad decisions on my part. This is not to say I think it's perfect. There are a few issues I've learnt to watch out for, but in the overwhelming majority of cases it has worked without a hitch. I frequently find a search to identify possible modules, together with a bit of background searching to see if anyone has had issues with those modules, and then a simple cpan -i <module name> and a few minutes later I'm coding with that module.
>
> I know 0 perl, so I went by what the docs said on how to install things. IIRC, I was trying to install one of the mail.yahoo to mbox thingies. Neither the stuff packaged with debian or ubuntu, nor the straight-from-upstream thing worked. Lots of errors, many pertaining to hard-coded paths in whatever makefile-like facility was used, others about incompatibility of modules. It just bombed over and over until I just stopped. I repeated that exercise twice with a few years apart, the last time about two years ago.

Sounds like either that was a badly packaged module, which is unusual but not unknown, or it was badly written documentation, which is more common than we would like. In the end, it sounds like you were unlucky and a victim of the balance between creating a system that enables/facilitates developers distributing their modules and providing a reliable repository that others can use.

>> The fact you have had problems with CPAN means you are likely to have some really valuable input into issues to watch out for, features that would be useful and things that may be best avoided.
>
> Well, as I said, 0 perl here. So I am afraid I can't really say much beyond "it did not work for me".

How well do you think someone with zero CL would do installing ASDF-based libs/applications?

>>> Maybe a good starting point would be to find out if such a thing is possible. I mean, it certainly sounds like a good idea, but given the many decisions that are conditional on context (load-load-load-dump vs. alternatives, for example) I am not too optimistic. Make & co certainly do not do it that way, I think.
>>
>> Well, it's pretty difficult to determine if something is possible without trying to do it. Some concrete suggestions on how this could be done would be useful.
>
> ASDF went ahead and tried it, and did a reasonably good job of bringing things to one standard, so that's what we are complaining about now, instead of a lack of standards. Without any sarcasm I'd say we have progressed.

I don't think anyone has suggested that ASDF hasn't been useful or hasn't improved the situation. Rather, I would argue that it's time to look at progressing further.
We should use the experiences and knowledge obtained through the use of ASDF and see if we can do it better. One of the criticisms of ASDF is that it is very weak with respect to any form of standardisation. There is one school of thought that would argue this is a good thing, and even a lispy way to do it. However, there is another school of thought which recognises the advantages of a flexible and only loosely specified system, but also recognises that this can result in a lot of complexity, particularly when you need to work with libraries that have very different ways of using ASDF. My view of this current discussion is to look at ASDF (and possibly other solutions) to identify what it does well and what it doesn't do so well, and see if we can come up with something even better. This could be as simple as just defining some basic conventions regarding how to use it, or it could be a total architectural re-engineering, maybe not even called ASDF. As yet, I don't think we even have a clear view of exactly what we want in a system definition framework, or even agreement on what we feel ASDF does well and does not do well.

>> What we need is concrete examples, not theoretical or vague 'gut feelings' regarding where problems may or may not exist. I think the object should be to improve the current situation, not aim for the ultimate perfect outcome. Even if we completely fail, we are likely to add additional knowledge that will be extremely useful for the next attempt!
>
> Writing an .asd to compile and link some fortran files and then load the resulting .so is IMO already too complicated.
>
> If you want to use a compiler that is written in Lisp, the .asd itself cannot be loaded without the compiler being loaded first, because the symbols the compiler exports do not exist. That leads to the asdf:operate forms at the beginning of the .asd.
>
> I think the most important part is really about policy. If software loader/builder scripts adhered to some common and good guidelines, and implemented the relevant scripts using some common library that made things easy to write and read, then we would be done. The general case includes people programmatically downloading patches and applying them, etc., which is simply too much for a defsystem form to express cleanly and easily.

I think these are exactly the types of issues that need to be looked at and discussed. Can we do this better? Is ASDF the right tool/place for these concerns? Are these high priority or low priority requirements? Etc.

> Here's something on the lines of what I have in mind:
>
> (load-module "src"
>   (load-file "file1"
>     (load-file "file 2"))
>   (load-file "file3"))
>
> where load-module would append the directory "src" to some special, and the load-file would be implemented in a lazy fashion. This would cover the simple cases. It is possible, and IMO better, to do all the dependency tracking programmatically and sanely, instead of doing it using some DSL (defsystem keyword maze) and then going through hoops to define and modify the dependencies and operators, etc.

This is the type of use case we need to collect. However, we also need other use cases, and to identify exceptions, corner cases, etc.
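One possible reading of Mario's example, as a sketch: assume nested LOAD-FILE forms are dependencies to be loaded before their parent, "lazy" means each file is loaded at most once per image, and *MODULE-PATH* is the "special" that LOAD-MODULE appends its directory to. None of these names exist anywhere; they are only here to make the intended semantics concrete.

    ;; Sketch of LOAD-MODULE / LOAD-FILE under the assumptions above.
    (defvar *module-path* *default-pathname-defaults*
      "Directory against which LOAD-FILE resolves its relative names.")

    (defvar *loaded-files* (make-hash-table :test #'equal)
      "Files already loaded, to make LOAD-FILE idempotent.")

    (defmacro load-module (directory &body body)
      "Evaluate BODY with *MODULE-PATH* extended by DIRECTORY."
      `(let ((*module-path* (merge-pathnames
                             (make-pathname :directory (list :relative ,directory))
                             *module-path*)))
         ,@body))

    (defmacro load-file (name &body dependencies)
      "Load DEPENDENCIES first, then NAME (at most once per image)."
      `(progn
         ,@dependencies
         (let ((path (merge-pathnames (make-pathname :name ,name :type "lisp")
                                      *module-path*)))
           (unless (gethash (namestring path) *loaded-files*)
             (setf (gethash (namestring path) *loaded-files*) t)
             (load path)))))

With these definitions, each nested file in the quoted example is loaded before its parent, and all of them are resolved relative to the "src" subdirectory.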
I don't think we can answer the question as to whether a programmatic approach, a DSL approach or a hybrid approach is better until we understand the range of possibilities better, and we probably won't be able to define that until we have some consensus on what we expect the system to do and what we don't expect it to do. A well designed DSL may not be as difficult or painful as you might expect. Likewise, a clear understanding of a programmatic approach would likely facilitate the development of library utilities that would make such a process easier and help ensure people adhere to a common standard or policy. My feeling is that we are lagging behind in how we manage this. These days, we have whole operating systems and other languages that handle things like dependency management and build processes far more reliably than is currently the case with CL. As yet, I don't know what the answer is, and I'm not even sure I understand the full nature of the problem. I don't expect we will get a perfect or even near-perfect solution. However, I do believe we can improve matters.

Tim

--
tcross (at) rapttech dot com dot au