From: Erik de Castro Lopo on 19 Mar 2010 04:52

Steve Pope wrote:
> If you're going to merge a branch, why branch off in the first
> place?

I often create branches to work on specific features or bug fixes.

Doing it in a branch means that the trunk is always stable and working
while I have absolute freedom in the branch. When I'm done with the
new feature or bug fix in the branch I merge to the trunk and delete
the branch.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo http://www.mega-nerd.com/
From: Erik de Castro Lopo on 19 Mar 2010 05:19

Erik de Castro Lopo wrote:
> Steve Pope wrote:
>
> > If you're going to merge a branch, why branch off in the first
> > place?
>
> I often create branches to work on specific features or bug fixes.
>
> Doing it in a branch means that the trunk is always stable and working
> while I have absolute freedom in the branch. When I'm done with the
> new feature or bug fix in the branch I merge to the trunk and delete
> the branch.

I should also mention that these new feature or bug fix branches hang
around for as little as a couple of hours or as much as a year.

Also, when I'm working on a branch, the trunk often gets minor fixes or
merges from other branches. If I haven't worked on a particular branch
for a while I usually merge from the trunk before continuing further.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo http://www.mega-nerd.com/
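[The workflow Erik describes maps directly onto a few commands in a
modern DVCS. A minimal sketch in Git, using a throwaway repository;
the file and branch names are invented for illustration, and Erik
doesn't say which tool he actually uses:]

```shell
#!/bin/sh
# Sketch of the branch-per-feature workflow described above, in Git.
# All paths and names here are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email erik@example.com
git config user.name Erik

# The trunk always stays stable and working.
echo 'stable code' > app.txt
git add app.txt
git commit -qm 'initial stable trunk'
trunk=$(git symbolic-ref --short HEAD)  # trunk name varies by Git version

# Branch off to work on a feature with complete freedom.
git checkout -qb feature-x
echo 'new feature' >> app.txt
git commit -qam 'implement feature X'

# When the feature is done: merge to the trunk, delete the branch.
git checkout -q "$trunk"
git merge -q feature-x
git branch -d feature-x
```

[Merging from the trunk into a long-lived branch before continuing, as
Erik mentions, would just be `git merge "$trunk"` while the feature
branch is checked out.]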
From: Michael Plante on 19 Mar 2010 05:22

Steve Pope wrote:
>steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
>
>>Attribution Lost wrote
>
>>>steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
>
>>>>While endless branching is hard to see as a good thing, the "few
>>>>branches as possible" mentality mostly comes about *because* you have
>>>>used CVS. It has moulded your way of thinking, because it makes
>>>>merging such a PITA.
>>>
>>>If you're going to merge a branch, why branch off in the first
>>>place?
>>>
>>>I have a little trouble conceiving of cases where branching
>>>ever makes sense, other than political/legal situations
>>>(i.e. you need to form a code base that is clean of certain IP,
>>>for example). And then you're probably not going to merge
>>>later on, other than again from some political event (i.e.
>>>a command decision by a project manager).
>>>
>>>Polymorphism is there; use it.
>>
>>So you never encounter people who want to go off and experiment? Maybe
>>you don't work on big projects.
>
>>People are actually afraid to experiment to any significant extent in a
>>project controlled by CVS. They know if the experiment works out, they
>>will have to practically re-implement the idea from scratch to ever be
>>able to merge it with the main line code - unless development of that
>>code has been very stagnant during the experimental period.
>
>If the experiment is larger than can be conducted in a
>sandbox, then you start checking it in. Using polymorphism,
>without breaking anything, making sure the regressions
>still pass.

I think this assumes a consistent vision. Some people on a project find
certain features more important than others. One can maintain that
feature off to the side privately, and rebase* those commits on top of
main line development until everyone else is ready.
It doesn't necessarily mean the code is broken; perhaps it can't be
easily tested (intermittent failures expected, if any, or perhaps not
all the hardware is available to the programmer in question). You can
push it for others to see, but they don't have to accept it.

[*] rebasing is an automated way of taking a series of patches that
constitute a branch (or part of a branch) and "grafting" them on top of
another point in the repo.

>
>It's not a matter of not working on large projects. Very
>large projects use lots of polymorphism and are targeted
>at multiple applications, including experiments, without
>branching.

Polymorphism, as I typically see the term used, involves some "from on
high" design, due to the sweeping scope of changes, which is counter to
the way most F/OSS projects work (once they're off the ground, anyway).
Now if you mean #ifdefs, as mentioned earlier, rather than class
hierarchy and similar design, then perhaps I can see that.

Who decides what's an experiment and what's mature? What if the question
is asked when each "experiment" is pretty much ready?

>But to say you need branches just to do large projects,
>I'm pretty unconvinced.

Right, if you only have one person committing, it is easy to keep a
linear history in SVN even for relatively large code bases. Linear
history isn't necessarily what some projects are after. I'm observing a
conflict between two mentalities:

1) knowing/remembering the path (and line of thinking) that led me to
making each change. This frequently consists of log messages reminding
me of debugging sessions in the distant past.

2) writing "the perfect patch" (and a patch is essentially a commit).
This is where each change is logically separate, and, more importantly,
discards much of the history of why it got the way it is, as long as it
works and is understandable.

You don't really even have option 2 with SVN, in my experience, because
history rewriting isn't really feasible.
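[The "grafting" that Michael's footnote describes is concrete in Git: a
side branch's patches are replayed on top of the trunk's newer tip. A
toy-repository sketch, with all names invented for illustration:]

```shell
#!/bin/sh
# Sketch of rebasing: grafting a branch's patches onto a newer point.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo base > file.txt
git add file.txt
git commit -qm 'base'
trunk=$(git symbolic-ref --short HEAD)

# A side branch accumulates two patches...
git checkout -qb topic
echo one > one.txt; git add one.txt; git commit -qm 'patch 1'
echo two > two.txt; git add two.txt; git commit -qm 'patch 2'

# ...while the trunk moves on independently.
git checkout -q "$trunk"
echo more >> file.txt
git commit -qam 'trunk advances'

# Rebase replays 'patch 1' and 'patch 2' on top of the new trunk tip,
# so the topic branch now contains the trunk's latest work underneath.
git checkout -q topic
git rebase -q "$trunk"
```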
Yeah, you can svnadmin dump/load and do stuff with the backups, but who
does that routinely just to clean up commits so they're presentable? No
one? I'm not saying one option is necessarily better, but Linux kernel
development prefers option 2. Frequently commits are reordered so they
make more sense later on. This usually involves some temporary branches
that are not merged.

>
>>Compare that to something
>>like the development of the Linux kernel with GIT. Lots of people, many
>>of whom hardly ever communicate, go off and experiment all the time. If
>>their experiment works out well they need to get others interested. It
>>usually proves straightforward to integrate their stuff, so some people
>>may try it with their own current code. If a consensus builds that the
>>experimental code is good stuff, final integration is also easy.
>
>>Easy integration has a profound and beneficial effect on working
>>practices in large diverse projects.
>
>Still doesn't say why these experiments could not have been
>checked into a main branch.

Well, each person has his own main branch, "master". What if the next
person in line doesn't entirely trust your main branch is up to snuff?
Plus, keeping your own changes on top of whatever you consider to be
upstream's changes is easier if you have your changes in a branch that
you just continue to rebase farther along master. These are called
"topic branches". And you typically have a handful of branches, each of
which tracks someone else's branches (the ones you're interested in,
anyway).

>Do people not know how to write
>code that doesn't break everything?

Sometimes, but the advantage is present even for good coders. I'd have
taken exactly the same attitude a few months ago before I started using
Git, but I simply didn't know what I was missing.
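[The topic-branch pattern described above — your own changes riding on
top of someone else's tree, periodically rebased forward — can be
sketched in Git with two toy repositories. Every path and name here is
invented:]

```shell
#!/bin/sh
# Sketch of a topic branch tracking a separate "upstream" repository.
set -e
work=$(mktemp -d)

# A hypothetical upstream tree that someone else maintains.
git init -q "$work/upstream"
cd "$work/upstream"
git config user.email up@example.com
git config user.name Upstream
echo v1 > kernel.txt; git add kernel.txt; git commit -qm 'upstream v1'
trunk=$(git symbolic-ref --short HEAD)

# Your clone, with your own changes kept on a topic branch.
git clone -q "$work/upstream" "$work/mine"
cd "$work/mine"
git config user.email me@example.com
git config user.name Me
git checkout -qb my-topic
echo tweak > tweak.txt; git add tweak.txt; git commit -qm 'my tweak'

# Upstream moves on...
cd "$work/upstream"
echo v2 >> kernel.txt
git commit -qam 'upstream v2'

# ...so you fetch and rebase the topic branch farther along their tip.
cd "$work/mine"
git fetch -q origin
git rebase -q "origin/$trunk"
```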
I had been using svn for a relatively short time compared to what some
other people here quote (only 4 years), and I never even bothered to try
to merge any branches, having heard the horror stories. I kept
everything in my svn trunk and wrote good code. All was well, but I was
the only one committing to that area of the repo. The only conflicts
arose when I worked on multiple computers and failed to check
works-in-progress in. When inevitable (but rare) mistakes occurred,
another svn commit went on top, with a note in the log message about
what revision the bug was introduced in. It's not clear if that's more
or less readable in my case.

>Do they not know how
>to clean up the truly dead code?

I don't see what dead code has to do with anything. And moreover, dead
code (or buggy code) can be completely obliterated, making the history
easier to read, if that's the way the maintainer(s) want it.

>I'm just not convinced there's a problem here.

It depends on your needs, I guess, but I wouldn't dismiss it out of hand
before you try collaborating with others using such a tool for a while.
It's probably less useful for a one-person team, but might still come in
handy once you have experience with the capabilities. Then you might
start to think in terms of those primitives (thinking inside a larger
box, to bastardize one metaphor).
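[The "completely obliterated" history rewriting Michael mentions is a
single rebase in Git. A toy sketch (in practice one would usually use
`git rebase -i` to pick which commits survive; the commits here are
invented):]

```shell
#!/bin/sh
# Sketch of obliterating dead-end commits from history with a rebase.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo good > a.txt;       git add a.txt;    git commit -qm 'good work'
echo junk > dead.txt;    git add dead.txt; git commit -qm 'dead end'
git rm -q dead.txt;                        git commit -qm 'back out dead end'
echo more >> a.txt;                        git commit -qam 'more good work'

# Replay everything after the 'back out' commit (HEAD~1) directly onto
# 'good work' (HEAD~3): the dead-end commit and its reversal vanish
# from the branch history entirely.
git rebase -q --onto HEAD~3 HEAD~1
```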
From: steveu on 19 Mar 2010 06:20

>Steveu wrote:
>>>steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
>>>
>>>>People are actually afraid to experiment to any significant extent in
>>>>a project controlled by CVS. They know if the experiment works out,
>>>>they will have to practically re-implement the idea from scratch to
>>>>ever be able to merge it with the main line code - unless development
>>>>of that code has been very stagnant during the experimental period.
>>>>Compare that to something like the development of the Linux kernel
>>>>with GIT.
>>>
>>>My recollection is they were gushing over how good SVN was
>>>until it required a license fee, then all of a sudden GIT
>>>was so much better.
>>>
>>>But I'm being cynical perhaps.
>>
>>SVN has always been free
>
>Yes.
>
>>and the Linux kernel people never used it.
>
>Not from what I'm reading. The people who were willing to used
>bitkeeper under an agreement with the company, and others had some
>ability to interoperate using svn and cvs. Not to say they liked
>svn/cvs necessarily, but some apparently used it.

There have long been CVS/SVN repositories for the Linux kernel cooked up
by outsiders, but not by Linus. Since the core development moved to
BitKeeper and on to GIT, CVS/SVN access to the main repository has been
possible.

>>They always said it was a waste of time.
>
>Didn't see that, but not surprised. Bitkeeper and Git are distributed,
>svn/cvs are not. And Linus claimed to have been forced to use CVS for 7
>(?) years on another project and "hates it with a passion," claiming
>emailing tarballs and patches are better than CVS.

Until BitKeeper came along Linus didn't use any formal version control
tools at all. I always found CVS slightly better than useless, and used
it. Linus was convinced it was far worse than useless, and wouldn't
entertain it. I see his point.

>>They did use and love BitKeeper,
>>which is commercial.
>
>And some complained about the license, but even RMS didn't deny the
>technical superiority of BitKeeper here:
>
>http://www.linux.com/archive/articles/44465
>
>"[...]but those in our community who value technical advantage above
>freedom and community were susceptible to it."

Apart from a few small wrinkles, I've heard very little criticism of
BitKeeper's design or implementation. It was purely licencing issues
that made developers of free software move on. I wonder how BitKeeper is
doing today?

>>When the relationship for using that went sour, Linus
>>Torvalds implemented GIT, which is closely modeled on BitKeeper's
>>functionality.

Steve
From: Steve Pope on 19 Mar 2010 10:40
steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:

>SVN has always been free and the Linux kernel people never used it.
>They always said it was a waste of time. They did use and love
>BitKeeper, which is commercial. When the relationship for using that
>went sour, Linus Torvalds implemented GIT, which is closely modeled on
>BitKeeper's functionality.

Thanks for clearing this up for me.

I have used BK on a couple of projects. I seem to recall it
distinguishes between a commit and a push, the first being more local,
and that this was sort-of useful if you used it in an organized way...

Steve
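[That commit/push split carried over into Git: a commit records a
change only in the local repository, and publishing it is a separate
step. A minimal sketch with a throwaway "central" repository, all
paths invented:]

```shell
#!/bin/sh
# Sketch of the commit (local) vs push (publish) distinction in Git.
set -e
work=$(mktemp -d)

# A shared bare repository standing in for the remote side.
git init -q --bare "$work/central.git"

# A private working repository.
git init -q "$work/local"
cd "$work/local"
git config user.email dev@example.com
git config user.name Dev

# "commit" records the change here only; central.git is still empty.
echo draft > notes.txt
git add notes.txt
git commit -qm 'recorded locally'

# "push" is the separate, deliberate step that publishes the commits.
git remote add origin "$work/central.git"
trunk=$(git symbolic-ref --short HEAD)
git push -q origin "$trunk"
```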