Dual Master Git Repositories

One of the nice things about git is the ability to work in a distributed manner. Instead of having to keep a central repository for your source code, you can create copies of your repository, do work, and share the changes across any number of machines. Often, when a few developers share code, they still use git with a central repository. There isn’t anything wrong with that, but sometimes I find myself wanting to set up something similar when I only want to share code between a couple of computers where I am the only developer. In that case, I don’t need three copies of the code. I want to be able to push or pull changes between the two machines directly, without having to push to a 3rd repository.

So, you might try the following steps:

# on the 1st machine
> git init .
> # make some code changes
> git add ...
> git commit

# on the 2nd machine
> git clone <1st machine ref>
> # make some changes
> git commit
> git push # here is the problem

When you push from the 2nd machine to the 1st, the branch (and HEAD) on the 1st machine is updated. The problem is that its working copy and index are NOT updated, so the checkout on the 1st machine no longer matches what its branch says it should contain. There are two ways to fix this.
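The mismatch is easy to reproduce. Below is a self-contained sketch using throwaway temp directories in place of two machines; all paths and names are made up, and the identity flags are just there so the commits succeed anywhere. (Newer gits refuse this push by default, so the sketch relaxes `receive.denyCurrentBranch` to recreate the old behavior.)

```shell
set -e
tmp=$(mktemp -d)

# "1st machine": an ordinary (non-bare) repository
git init -q "$tmp/first"
cd "$tmp/first"
echo one > file.txt
git add file.txt
git -c user.name=me -c user.email=me@example.com commit -qm initial
# Newer gits refuse pushes to the checked-out branch by default;
# relax that so the problem is reproducible.
git config receive.denyCurrentBranch ignore

# "2nd machine": a clone of the first
git clone -q "$tmp/first" "$tmp/second"
cd "$tmp/second"
echo two >> file.txt
git -c user.name=me -c user.email=me@example.com commit -qam "from second"
git push -q origin HEAD        # here is the problem

# Back on the "1st machine": the branch moved, but the working copy did not.
cd "$tmp/first"
git status --short             # file.txt shows up as changed
cat file.txt                   # the file on disk is still the old version
```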

Mediocre solution: recover after the push has already happened:

# warning: the following will destroy any uncommitted local changes
> git reset --hard HEAD

If you had made changes on the 1st machine and forgotten to commit them, you’d need to stash them first, commit them on a branch, or similar.
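A stash around the reset is the easy way to keep those edits safe. This is a self-contained sketch (file names and the commit identity are made up; `git stash push` needs a reasonably modern git):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo committed > notes.txt
git add notes.txt
git -c user.name=me -c user.email=me@example.com commit -qm base
echo uncommitted >> notes.txt      # a local edit we forgot to commit

git stash push -q -m "wip before accepting a push"
git reset -q --hard HEAD           # safe now: the edit is stashed, not destroyed
git stash pop -q                   # re-apply the local edit afterwards
grep uncommitted notes.txt         # the edit survived the reset
```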

Better way: avoid the reset altogether

# on the 2nd machine
> git push origin master:refs/heads/tmp_branch_name
# on the 1st machine
> git merge tmp_branch_name
> git branch -d tmp_branch_name
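Put together, the push-to-a-temporary-branch flow looks like this end to end (a self-contained sketch with throwaway directories standing in for the two machines; names are made up). Because `tmp_branch_name` is not the checked-out branch on the 1st machine, the push is safe, and the merge there updates the working copy properly:

```shell
set -e
base=$(mktemp -d)
git init -q "$base/first"
(
  cd "$base/first"
  echo one > file.txt
  git add file.txt
  git -c user.name=me -c user.email=me@example.com commit -qm initial
)
git clone -q "$base/first" "$base/second"
(
  cd "$base/second"
  echo two >> file.txt
  git -c user.name=me -c user.email=me@example.com commit -qam "from second"
  # Push to a temporary branch instead of the branch checked out on "first":
  git push -q origin HEAD:refs/heads/tmp_branch_name
)
(
  cd "$base/first"
  git merge -q tmp_branch_name     # fast-forwards the working copy too
  git branch -qd tmp_branch_name   # clean up the temporary branch
)
```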

Another solution would be to add a remote on the 1st machine that points to the 2nd machine. Then each machine can just pull from the other rather than doing any pushes at all.
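The pull-based setup can be sketched like this (again self-contained, with local temp directories standing in for the two machines; in real use the remote URL would be an ssh path to the other machine):

```shell
set -e
base=$(mktemp -d)
git init -q "$base/first"
(cd "$base/first" && echo one > f.txt && git add f.txt \
  && git -c user.name=me -c user.email=me@example.com commit -qm initial)
git clone -q "$base/first" "$base/second"
(cd "$base/second" && echo two >> f.txt \
  && git -c user.name=me -c user.email=me@example.com commit -qam more)

# On the "1st machine": point a remote at the 2nd and pull instead of pushing.
cd "$base/first"
git remote add other "$base/second"
git pull -q other HEAD       # fetch + merge; the working copy stays consistent
```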
Have fun.


2 Responses to Dual Master Git Repositories

  1. Hans Fugal says:

    A better way is to have a bare “hub” repository that you push to and pull from with each repository. But I admit that I do something similar to you, though I have settled on a pattern that’s a bit more predictable and easier to recover from when I screw it up:

    I set up the repo on “server”, then clone it on “laptop”. Then I edit .git/config on laptop and add this line to the remote (named origin by default):

    [remote "origin"]
        push = +refs/heads/*:refs/remotes/laptop/*

    Then when I do git push, everything is pushed but no HEAD is changed and no working directory is messed up. Then on the server I do git merge laptop. You can also work with multiple branches, etc.

    Likewise when you git fetch server, you have the freedom to merge or rebase or whatever you want to do explicitly on the server’s status. Or you can git pull of course.

    The only thing to watch for is occasionally you’ll want to do a git remote prune to clean up the old branches that you deleted but whose deletion didn’t get propagated.
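The refspec pattern this comment describes can be sketched end to end as follows. It is self-contained in throwaway directories; “server” and “laptop” are just directory names here, the commit identity is made up, and the merge on the server names the pushed branch explicitly (`laptop/<branch>`):

```shell
set -e
base=$(mktemp -d)
git init -q "$base/server"
(cd "$base/server" && echo v1 > app.txt && git add app.txt \
  && git -c user.name=me -c user.email=me@example.com commit -qm v1)
git clone -q "$base/server" "$base/laptop"
(
  cd "$base/laptop"
  # The refspec from the comment: push every local branch into a
  # remote-tracking namespace on the server, never its real branches,
  # so the server's checkout is never disturbed by a push.
  git config remote.origin.push '+refs/heads/*:refs/remotes/laptop/*'
  echo v2 >> app.txt
  git -c user.name=me -c user.email=me@example.com commit -qam v2
  git push -q
)
(
  cd "$base/server"
  # Merge explicitly, on the server's own terms.
  git merge -q "laptop/$(git symbolic-ref --short HEAD)"
)
```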

  2. Dennis says:

    The cool thing is, you can do just about whatever you want. We use bare server repos all the time. Sometimes, I work on a couple machines, but I don’t want to push to the server repo (not ready for co-workers to get the updates, etc). I can push back and forth between my machines, edit until I like things, then finally push to the server. I could also share some changes with a co-worker, but not push to the server. When I finally push and he pulls, git is smart enough to not have a single issue. git isn’t perfect, but it sure beats older SCM technology.
