One of the most important practices in a good Salesforce sandbox strategy is the downstream deployment. Unfortunately, far too few teams do regular, repeatable downstream deployments from their production environment into their staging and development environments.
What is a downstream deployment?
A downstream deployment is a deployment from your production environment down to one or more of your development or staging orgs. Depending on the size of your team you may have a single developer org that you deploy straight to production from, or you might have dozens. Regardless of your team size, a key component of a good sandbox strategy is keeping your test and development environments as close to production as possible. The reason is simple: you want to make sure that features you build in a test environment will not unexpectedly fail when deployed to production. Given the complexity of dependencies in many Salesforce orgs, it is not uncommon for a team to see a shiny new feature arrive in production full of bugs when it was “working just fine on my sandbox.”
The Limits of Salesforce Sandbox Refreshes
Typically, teams use sandbox refreshes to accomplish downstream updates. This is problematic, though, because sandbox refreshes are limited and slow. Depending on your sandbox type, you can only refresh every so often: Full sandboxes can only be refreshed once every 29 days, Partial Copy sandboxes once every 5 days, and even Developer sandboxes only once per day. If you are trying to achieve continuous delivery for Salesforce you will need to refresh more often than that.
Furthermore, sandbox refreshes are slow. It can take hours or days to do a refresh.
Sandbox refreshes include both data and metadata. This can be annoying if you’ve configured a bunch of test data on your sandbox to work on features. You may want your metadata to mirror production, but not necessarily your data.
Finally, refreshes are all-or-nothing affairs. What if you want to do a downstream deploy from production without wiping out the work you’ve started? With a refresh there is no way to exclude certain components or files coming from production.
Using Git to Manage Downstream Deployments
Given the limitations of Salesforce sandbox refreshes many have sought a better way. Source control can help with this. At Blue Canvas we love Git, but you can apply the same principles to other source control systems like SVN.
Source control is ideal for downstream deployments because it addresses the limitations of sandbox refreshes. First, there are no limits to how frequently you can do a downstream git merge. Second, because you are only touching metadata, you don’t have to worry about losing your test data. And because it’s only metadata, the downstream merge is far more performant: Git is written in C and is designed to handle large changes to a codebase almost instantaneously.
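As a minimal sketch of what such a downstream merge can look like in raw Git (the branch names prod and alpha-webinar, the file names, and the repo layout here are all illustrative assumptions, not a prescribed Blue Canvas setup):

```shell
# Illustrative sketch: branch names ("prod", "alpha-webinar") and files
# are assumptions, not a real org layout.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

# "prod" tracks production metadata
git checkout -qb prod
echo 'public class Invoice {}' > Invoice.cls
git add -A && git commit -qm "prod: add Invoice.cls"

# the sandbox branch forks off and starts new work
git checkout -qb alpha-webinar
echo 'public class BookPrice {}' > BookPrice.cls
git add -A && git commit -qm "sandbox: WIP on BookPrice.cls"

# meanwhile production moves on (e.g. a hotfix)
git checkout -q prod
echo '// hotfix' >> Invoice.cls
git commit -qam "prod: hotfix Invoice.cls"

# the downstream merge: no refresh windows, metadata only,
# and the in-progress BookPrice.cls survives alongside the hotfix
git checkout -q alpha-webinar
git merge -q --no-edit prod
ls
```

After the merge, the sandbox branch holds both the production hotfix and the unfinished sandbox work, with no refresh window to wait for.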
Maybe best of all, Git is designed to handle complex collaboration workflows. If you have multiple people working on the same sandboxes, or have features that are being worked on in a sandbox, you can still sync the sandbox to production without overwriting your features. You can be selective with which classes you want to include and which you want to exclude.
An Example of Selective Downstream Deployments
In my sandbox, alpha-webinar, I have been working on a change to BookPrice.cls. I started it a month ago, but other more urgent priorities emerged and I had to stop working on it while I made other changes that made it to production. Needless to say, prod-webinar has changed quite a bit in the past month. There have been additions, deletions and modifications.
Before I start working on BookPrice.cls again, I want to pull all the changes from prod-webinar down into my sandbox. This will give me confidence that all my tests on my sandbox are legitimate and consistent with what I will see in prod-webinar. But I do not want to overwrite and delete the work that I have done on the new class.
With Git I can do this very easily. I just create a downstream Deployment Request from prod-webinar into alpha-webinar. I can see that there are 25 different files, 45 additions and 75 deletions between prod-webinar and alpha-webinar. Git clearly shows me that if I did a simple refresh, BookPrice.cls would be deleted. With Git and Blue Canvas I can simply remove the deletion from the destructiveChanges.xml with a single click.
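In raw Git terms, spotting what a full mirror of production would destroy is a one-line diff. The sketch below uses the same illustrative branch names and a made-up file set; it shows the mechanism, not the Blue Canvas implementation:

```shell
# Sketch: which sandbox files would a refresh-style mirror of production
# delete? Branch and file names are illustrative assumptions.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

git checkout -qb prod
echo 'public class Invoice {}' > Invoice.cls
git add -A && git commit -qm "prod metadata"

git checkout -qb alpha-webinar
echo 'public class BookPrice {}' > BookPrice.cls
git add -A && git commit -qm "WIP: BookPrice.cls"

# Diff from the sandbox toward production: every "D" line is a file
# that a full mirror of production would wipe out of the sandbox
git diff --name-status alpha-webinar prod
```

A merge, by contrast, keeps the sandbox-side addition; dropping the entry from destructiveChanges.xml accomplishes the same thing for the deploy.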
Now when I do the downstream merge, I will still have my BookPrice.cls from one month ago that I can keep working on. All my test data also remains the same, so I don’t need to do anything to set it up again.
If something were to go wrong with any of this, I can rest assured because Git keeps a continuous history of all changes to my Salesforce orgs. If I did accidentally delete BookPrice.cls, I would be okay: I could simply pull up my Git history, see what I had done before, and manually add it back in.
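A hedged sketch of that recovery in plain Git (file names and contents are again made up for illustration):

```shell
# Sketch: recovering an accidentally deleted class from Git history.
# File names and contents are illustrative assumptions.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git checkout -qb alpha-webinar

echo 'public class BookPrice {}' > BookPrice.cls
git add -A && git commit -qm "add BookPrice.cls"

# the accident: the class gets deleted and committed
git rm -q BookPrice.cls
git commit -qm "oops: deleted BookPrice.cls"

# history still lists every commit that touched the file
git log --oneline -- BookPrice.cls

# restore it from the commit just before the deletion
git checkout HEAD~1 -- BookPrice.cls
cat BookPrice.cls
```

The file is back in the working tree exactly as it was before the deletion, ready to commit again.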
To test out downstream deployment requests set up an account at https://manage.bluecanvas.io.