Here’s how to set up Keybase to mirror (automatically back up) your GitHub repo:
git remote set-url --add --push origin keybase://team/name/repo
git remote set-url --add --push origin firstname.lastname@example.org:team/repo.git
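A quick way to convince yourself that a single `git push` fans out to every push URL on the remote is to try it with two local bare repositories standing in for GitHub and Keybase (all paths and names below are made up for illustration):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Two bare repos stand in for GitHub and the Keybase remote.
git init --bare -q mirror-a.git
git init --bare -q mirror-b.git

# A working repo with both mirrors registered as push URLs on one remote.
git init -q repo
cd repo
git remote add origin "$work/mirror-a.git"
git remote set-url --add --push origin "$work/mirror-a.git"
git remote set-url --add --push origin "$work/mirror-b.git"

git -c user.email=me@example.org -c user.name=me \
    commit -q --allow-empty -m "initial commit"

# One push updates every push URL.
git push -q origin HEAD
```

After this, both bare repos contain the commit, which is exactly the mirroring behavior the two `set-url --add --push` commands above rely on.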
The advantage is that if GitHub goes offline or decides to remove your data, your code is protected!
But it’s likely that not all committers have Keybase installed, and you don’t want an extra step or hurdle between you and a working backup: that introduces fragility and adds a point of failure. In that case, how do you ensure that your hard work is seamlessly backed up?
Presenting a solution: https://github.com/codeforcash/github-to-keybase-mirror
This project deploys a serverless function and generates a webhook URL; registered on your GitHub repository, it automatically pushes any new commits to your Keybase git remote.
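Conceptually, on each push event the function does the equivalent of a mirror clone followed by a mirror push. Here is a runnable sketch of that idea using local bare repositories in place of GitHub and the Keybase remote (the real function shells out to bundled git and keybase binaries, so treat this as an approximation, not the project’s actual code):

```shell
set -e
d=$(mktemp -d)

# A source repo with one commit stands in for the GitHub repository.
git init -q "$d/src"
git -C "$d/src" -c user.email=me@example.org -c user.name=me \
    commit -q --allow-empty -m "pushed commit"

# Stand-in for the Keybase remote (e.g. keybase://team/name/repo).
git init --bare -q "$d/keybase-remote.git"

# What the webhook roughly does on each push event:
git clone -q --mirror "$d/src" "$d/work.git"
git --git-dir="$d/work.git" push -q --mirror "$d/keybase-remote.git"
```

`--mirror` copies and pushes all refs (branches and tags), which is what you want for a backup rather than a single-branch push.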
I had the good fortune of collaborating with a developer, Mark Koenig, who did the hard work of implementing the project. He shared this writeup:
I learned a lot about Lambda functions and AWS in general. I was surprised to see that when I tested locally the function would work perfectly, but once I deployed it to the AWS servers it would not work at all. I found that this comes down to differences between the AWS sam-local Docker container and the production environment.
The sam-local container has different permissions for its file structure than the production Lambda containers. So instead of accessing the Keybase binaries right in their standard folder, /var/task/gopath/bin, I had to copy them to the /tmp folder so that I could update their permissions with chmod.
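That copy-then-chmod step looks roughly like this. The sketch uses a dummy script in a temporary directory standing in for the real keybase binary under /var/task/gopath/bin, so it is runnable anywhere:

```shell
set -e
# Stand-in for the deployment package directory (/var/task/gopath/bin
# in the Lambda environment), which is read-only in production.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$bindir/keybase"

# The pattern from the write-up: copy the binary to the writable /tmp
# folder, then mark it executable there.
cp "$bindir/keybase" /tmp/keybase
chmod 755 /tmp/keybase
/tmp/keybase
```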
In the Lambda environment, you can only write to the /tmp folder, but Keybase kept trying to write to /home/user/... to make new directories and especially to make and access its keybased.sock file. This was compounded by the fact that if I changed the HOME environment variable on the local Docker container, the function would not run at all! I was overjoyed when I found in the Keybase documentation that I could bypass both of these problems by instead adding an XDG_RUNTIME_DIR environment variable and setting it to /tmp. For more details on that, see the XDG Base Directory specification. Once I figured that out, it was pretty much smooth sailing for the rest.
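The fix described above amounts to a single environment variable, set before the Keybase client runs (a minimal sketch; the exact layout Keybase creates under /tmp is its own choice):

```shell
# Point Keybase's runtime directory at the only writable path in
# Lambda, instead of touching HOME. Per the XDG Base Directory spec,
# the client will then create its keybased.sock under /tmp.
export XDG_RUNTIME_DIR=/tmp
echo "$XDG_RUNTIME_DIR"
```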
All in all, even though it was challenging and frustrating at times, it was still pretty fun and very satisfying once it finally worked.
Thanks to dxb for feedback on this post. Thanks to Alex Markley for pointing out that this is an easier, simpler solution than current techniques.