From: Aidan Van Dyk on
* Robert Haas <robertmhaas(a)gmail.com> [100720 13:04]:

> 3. Clone the origin once. Apply patches to multiple branches by
> switching branches. Playing around with it, this is probably a
> tolerable way to work when you're only going back one or two branches
> but it's certainly a big nuisance when you're going back 5-7 branches.

This is what I do when I'm working on a project that has completely
proper dependencies, where you don't need to re-run configure every
time you switch branches.  I use ccache heavily, so configure takes
longer than a complete build with a couple dozen
actually-not-previously-seen changes...

But *all* dependencies need to be proper in the build system, or you
end up needing a git-clean-type cleanup between branch switches,
forcing a new configure run too, which takes too much time...

Maybe this will cause the make dependencies to be refined in PG ;-)

It has the advantage that if "back patching" (or in reality, forward
patching) all happens in one repository, the git conflict machinery is
all using the same cache of resolutions, meaning that if you apply the
same patch to two different branches with an identical code conflict,
you don't need to do the whole conflict resolution by hand from scratch
in the second branch.
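
For anyone who hasn't run into it, that cache of resolutions is git's
"rerere" (reuse recorded resolution) machinery.  A minimal sketch of
turning it on in the one shared work repo, nothing PostgreSQL-specific
about it:

git config rerere.enabled true     # record and replay conflict resolutions
git config rerere.autoupdate true  # optionally stage hunks rerere resolved

Once it's on, resolving a conflict on one branch lets git replay the
same resolution when the identical conflict shows up on another branch.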

> 5. Use git clone --shared or git clone --references or
> git-new-workdir. While I once thought this was the solution, I can't
> take very seriously any solution that has a warning in the manual that
> says, essentially, git gc may corrupt your repository if you do this.

This is the type of setup I often use.  I have a "central" set of git
repos that are simply automatic mirror-clones of the project
repositories, kept up to date via cron.  Any time I clone a work repo,
I use --reference.

Since I make sure I never "remove" anything from the reference repo, I
don't have to worry about losing objects that other repositories might
be using from the "cache" repo.  In case anyone is wondering, that's:
git clone --mirror $REPO /data/src/cache/$project.git
git --git-dir=/data/src/cache/$project.git config gc.auto 0

And then in crontab:
git --git-dir=/data/src/cache/$project.git fetch --quiet --all

With gc.auto disabled, and the only commands ever run being "git fetch",
no objects are removed, even if a remote rewinds and throws away
commits.
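
And a work clone that borrows objects from that cache looks roughly
like this (the paths and branch name are just examples, not anything
prescriptive):

git clone --reference /data/src/cache/$project.git $REPO work-$project
cd work-$project
git checkout -b fix-whatever origin/REL8_4_STABLE

Objects already present in the cache repo aren't copied into the work
clone, so the clone stays small and the network transfer is tiny.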

But this way means that the separate repos only share the "past, from
the central repository" history, which means that you have to jump
through hoops if you want to use git's handy
merging/cherry-picking/conflict tools when trying to rebase/port
patches between branches.  You're pretty much limited to exporting a
patch, changing to the new branch repository, and applying the patch.
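
In practice that export/apply dance is something like the following
(the repo paths here are hypothetical):

git format-patch -1 --stdout HEAD > /tmp/fix.patch   # in the repo with the commit
cd ../work-REL8_4_STABLE                             # sibling repo for the back branch
git am -3 /tmp/fix.patch                             # -3 tries a 3-way merge on conflict

It works, but you lose the shared resolution cache you get when
everything lives in one repository.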

a.

--
Aidan Van Dyk Create like a god,
aidan(a)highrise.ca command like a king,
http://www.highrise.ca/ work like a slave.
From: "Kevin Grittner" on
Robert Haas <robertmhaas(a)gmail.com> wrote:

> 2. Clone the origin n times. Use more disk space. Live with it. :-)

But each copy uses almost 0.36% of the formatted space on my 150GB
drive!

-Kevin


From: Peter Eisentraut on
On tis, 2010-07-20 at 13:28 -0400, Aidan Van Dyk wrote:
> But *all* dependencies need to be proper in the build system, or you
> end up needing a git-clean-type cleanup between branch switches,
> forcing a new configure run too, which takes too much time...

This realization, while true, doesn't really help, because we are
talking about maintaining back branches that are 5+ years old, where we
are not going to fiddle with the build system at this point.  Also, the
switch from 9.0 to 9.1 the other day showed everyone who cared to watch
that the dependencies are currently not correct across major version
switches, so this method will definitely not work at the moment.



From: Peter Eisentraut on
On tis, 2010-07-20 at 13:04 -0400, Robert Haas wrote:
> 2. Clone the origin n times. Use more disk space. Live with it. :-)

Well, I plan to use cp -a to avoid cloning over the network n times,
but other than that, that was my plan.  My .git directory currently
takes 283 MB, so I think I can just about live with that.
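
Concretely, something along these lines (the URL and branch name are
placeholders, not a recommendation):

git clone $PG_GIT_URL postgresql           # one clone over the network
cp -a postgresql postgresql-REL8_4_STABLE  # plain copy per back branch
cd postgresql-REL8_4_STABLE
git checkout REL8_4_STABLE

Each copy gets a full, independent object store, hence the ~283 MB of
.git per copy, but there is nothing shared for git gc to corrupt.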



From: Andrew Dunstan on


Robert Haas wrote:
> Tom and, I believe, also Andrew have expressed some concerns about the
> space that will be taken up by having multiple copies of the git
> repository on their systems. While most users can probably get by
> with a single repository, committers will likely need one for each
> back-branch that they work with, and we have quite a few of those.
>
> After playing around with this a bit, I've come to the conclusion that
> there are a couple of possible options but they've all got some
> drawbacks.
>
> 1. Clone the origin. Then, clone the clone n times locally. This
> uses hard links, so it saves disk space. But, every time you want to
> pull, you first have to pull to the "main" clone, and then to each of
> the "slave" clones. And same thing when you want to push.
>

You can have a cron job that does the first pull fairly frequently. It
should be a fairly cheap operation unless the git protocol is dumber
than I think.

The second pull is the equivalent of what we do now with "cvs update".

Given that, you could push commits direct to the authoritative repo and
wait for the cron job to catch up your local base clone.
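
One way to wire that up (paths, schedule, and URLs are placeholders):

# one-time: a bare mirror as the local base clone
git clone --mirror $PG_GIT_URL $HOME/pg/base.git
# one-time: branch clones are local clones of the base, so objects are hard-linked
git clone --branch REL8_4_STABLE $HOME/pg/base.git $HOME/pg/REL8_4_STABLE

# crontab: keep the base mirror current (in a mirror, fetch updates every branch)
*/15 * * * * git --git-dir=$HOME/pg/base.git fetch --quiet

# in each branch clone, this is the "cvs update" equivalent
cd $HOME/pg/REL8_4_STABLE && git pull

# commits go straight to the authoritative repo, not through the base clone
cd $HOME/pg/REL8_4_STABLE && git push $AUTHORITATIVE_URL REL8_4_STABLE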

I think that's the pattern I will probably try to follow.

cheers

andrew
