From: Nicholas Kinar on 19 Mar 2010 10:57

On 18/03/2010 9:04 AM, Jerry Avins wrote:
> http://www.joelonsoftware.com/items/2010/03/17.html is about version
> control. If you use VC software, it might be worth reading.
>
> Jerry

That's an interesting article, Jerry. Personally, I use Canonical's Bazaar
to manage revisions of software code. Here's a link showing the many
flexible workflows that are possible with Bazaar:

http://doc.bazaar.canonical.com/bzr.2.1/en/user-guide/bazaar_workflows.html

There's also an excellent GUI for Bazaar:

http://doc.bazaar.canonical.com/explorer/en/
From: Les Cargill on 19 Mar 2010 19:14

Michael Plante wrote:
> Steve Pope wrote:
>> steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
>>
>>> Attribution Lost wrote
>>>> steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
>>>>> While endless branching is hard to see as a good thing, the "few
>>>>> branches as possible" mentality mostly comes about *because* you have
>>>>> used CVS. It has moulded your way of thinking, because it makes
>>>>> merging such a PITA.
>>>> If you're going to merge a branch, why branch off in the first
>>>> place?
>>>>
>>>> I have a little trouble conceiving of cases where branching
>>>> ever makes sense, other than political/legal situations
>>>> (i.e. you need to form a code base that is clean of certain IP,
>>>> for example). And then you're probably not going to merge
>>>> later on, other than again from some political event (i.e.
>>>> a command decision by a project manager).
>>>>
>>>> Polymorphism is there; use it.
>>> So you never encounter people who want to go off and experiment? Maybe
>>> you don't work on big projects.
>>> People are actually afraid to experiment to any significant extent in
>>> a project controlled by CVS. They know if the experiment works out,
>>> they will have to practically re-implement the idea from scratch to
>>> ever be able to merge it with the main line code - unless development
>>> of that code has been very stagnant during the experimental period.
>> If the experiment is larger than can be conducted in a
>> sandbox, then you start checking it in. Using polymorphism,
>> without breaking anything, making sure the regressions
>> still pass.
>
> I think this assumes a consistent vision. Some people on a project find
> certain features more important than others. One can maintain that
> feature off to the side privately, and rebase* those commits on top of
> main line development until everyone else is ready. It doesn't
> necessarily mean the code is broken; perhaps it can't be easily tested
> (intermittent failures expected, if any, or perhaps not all the hardware
> is available to the programmer in question). You can push it for others
> to see, but they don't have to accept it.
>
> [*] rebasing is an automated way of taking a series of patches that
> constitute a branch (or part of a branch) and "grafting" them on top of
> another point in the repo.
>
>> It's not a matter of not working on large projects. Very
>> large projects use lots of polymorphism and are targeted
>> at multiple applications, including experiments, without
>> branching.
>
> Polymorphism, as I typically see the term used, involves some "from on
> high" design, due to the sweeping scope of changes, which is counter to
> the way most F/OSS projects work (once they're off the ground, anyway).
> Now if you mean #ifdefs, as mentioned earlier, rather than class
> hierarchy and similar design, then perhaps I can see that.
>
> Who decides what's an experiment and what's mature? What if the question
> is asked when each "experiment" is pretty much ready?
>
>> But to say you need branches just to do large projects,
>> I'm pretty unconvinced.
>
> Right, if you only have one person committing, it is easy to keep a
> linear history in SVN even for relatively large code bases.
>
> Linear history isn't necessarily what some projects are after. I'm
> observing a conflict between two mentalities:
>
> 1) knowing/remembering the path (and line of thinking) that led me to
> making each change. This frequently consists of log messages reminding
> me of debugging sessions in the distant past.
>
> 2) writing "the perfect patch" (and a patch is essentially a commit).
> This is where each change is logically separate, and, more importantly,
> discards much of the history of why it got the way it is, as long as it
> works and is understandable.
>
> You don't really even have option 2 with SVN, in my experience, because
> history rewriting isn't really feasible. Yeah, you can svnadmin
> dump/load and do stuff with the backups, but who does that routinely
> just to clean up commits so they're presentable? No one?

So save the output of an "svn diff" for each "svn commit." Coupled with
patch (which has an "invert" option), it's completely symmetrical.

<snip>

--
Les Cargill
From: Michael Plante on 20 Mar 2010 00:47

Les Cargill wrote:
> Michael Plante wrote:
>> You don't really even have option 2 with SVN, in my experience, because
>> history rewriting isn't really feasible. Yeah, you can svnadmin
>> dump/load and do stuff with the backups, but who does that routinely
>> just to clean up commits so they're presentable? No one?
>
> So save the output of an "svn diff" for each "svn commit." Coupled with
> patch (which has an "invert" option), it's completely symmetrical.

You're right, you could do that (w/o ever touching svnadmin), but it's
still a whole lot more trouble to do that in svn. That could potentially
be automated, but, AFAIK, it hasn't been. It wouldn't be quite the same,
but it's close enough (dump/load would allow you to remove revisions
entirely, but that's not entirely necessary). The question, in a modified
form, still stands: would anyone ever actually bother to do what you
suggest *in svn*, or is it just too much of a pain?

One thing I'm not sure about is whether the patches generated that way
would apply cleanly less often than if the history were there (I don't
know).

Michael
From: Les Cargill on 20 Mar 2010 15:42

Michael Plante wrote:
> Les Cargill wrote:
>> Michael Plante wrote:
>>> You don't really even have option 2 with SVN, in my experience,
>>> because history rewriting isn't really feasible. Yeah, you can
>>> svnadmin dump/load and do stuff with the backups, but who does that
>>> routinely just to clean up commits so they're presentable? No one?
>> So save the output of an "svn diff" for each "svn commit." Coupled with
>> patch (which has an "invert" option), it's completely symmetrical.
>
> You're right, you could do that (w/o ever touching svnadmin), but it's
> still a whole lot more trouble to do that in svn.

It's seconds per rev. I could see how patches may not have the transitive
property - two patches might collide in a way to produce a defect that
reversing either would fix - but that's not the general case.

> That could potentially be automated, but, AFAIK, it hasn't been. It
> wouldn't be quite the same, but it's close enough (dump/load would allow
> you to remove revisions entirely, but that's not entirely necessary).
> The question, in a modified form, still stands: would anyone ever
> actually bother to do what you suggest *in svn*, or is it just too much
> of a pain?

I do this. I attach a note to bug reports or feature requests with a
diff.

I still don't see why cvs or svn aren't sufficient. Maybe in an
environment where a review committee must push all changes, but even
then, there's a non-painful way. In that case, I'd have them review the
diff anyway.

I have used either CVS or svn for over half of the last 21 years. I have
not had any trouble with either. Indeed, as far as maintenance goes,
unless you do it "diff driven", it's very difficult to keep track of
where you are. I'd claim new development is but a special case of
maintenance.

Having to have a server is no defect; you probably want it on conditioned
power and a UPS anyway. Might want RAID and other hardening as well.

If you're assuming an IDE, that's different. I personally think IDEs are
a mistake (they're nothing but vendor lock-in), but that's just my
opinion. Obviously, if your organization is embracing UML, it might make
sense. But CASE tools usually last less than five years in the
marketplace, then IBM buys them and they slowly fade away.

> One thing I'm not sure about is whether the patches generated that way
> would apply cleanly less often than if the history were there (I don't
> know).

The "patch" tool is completely independent of svn.

> Michael

--
Les Cargill
From: robert bristow-johnson on 20 Mar 2010 21:57
On Mar 18, 11:47 am, "steveu" <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
>> http://www.joelonsoftware.com/items/2010/03/17.html is about version
>> control. If you use VC software, it might be worth reading.
>
> SVN is one of the saddest open source projects. So much effort went into
> reimplementing CVS without its problems, totally missing that its key
> problem is the underlying project model it implements.
>
> Mercurial and Git seem to be slowly taking over, but it takes people a
> while to get their heads around them.

so Steve, what is to be gained with a VC that requires getting one's head
around? ya know, i just can't do a friggin' thing with Xcode (oh, i use
it to run simple argv and argc ANSI-compatible programs) but i used to be
able to write a simple Mac application in the olden days. ya know, with
windows and controls and pull-down menus and standardized Mac file I/O.

i could deal with VC that was simple and didn't do merges. i just hate
the awful file compare format that comes out of CVS or SVN. i liked the
good ol' days when the IDE *app* did the file change "delta" (displaying
with highlighting, similar to Wikipedia) and then the other basic dating
and backup. because a merge is not provided, file status is very simple:
checked-out or checked-in.

when i was using Code Warrior, i was complaining because it wasn't as
concise and logical as THINK C (never met the guy, but i think Michael
Kahl wrote the best C compiler ever, and the predecessor, Lightspeed C,
was the original IDE). back in those days you could use MacDraw or
MacDraft to quickly put together any object-oriented graphic and you
could actually learn it in a single use. i still can't do anything with
Illustrator. i use PC programs like Dia, but that's worse than ironic: i
can't use a decent Mac program to do some basic graphics work so i end up
using a PC??!

Apple has fulfilled its destiny (i was so disgusted in the 90's with
those intel ads on TV with guys in multicolored clean-room suits dancing
around with the first Pentiums). i mean, it's like voting for Obama and
finding out he's Bush (this actually hasn't happened yet, so just imagine
it). can you sympathize at all with the disappointment?

i just wonder if there will have to be another little revolution in
computing like the Mac was originally with its "Windows-emulating" user
interface. computers are s'posed to serve *people*, not the other way
around.

r b-j