Learn practical Salesforce deployment best practices and tips from expert developer Atlas Can for smooth Salesforce deployment operations.
In this article, we offer guidance on how to successfully tackle Salesforce deployments. You will learn about timing, sandbox usage, metadata management, testing, governance, and the critical role of DevOps in ensuring smooth operations. Through practical examples, you'll gain insights into the intricacies of Salesforce deployments.
Here are our key takeaways:
Ah, deployment day! For those who are uninitiated, Salesforce deployments can be quite difficult to tackle at first. (I'm looking at you, solo admins and developers - the unsung heroes of the Salesforce world!) Deployments can fail for many reasons. I'm sure you have been there - duplicate or inactive picklist values, a missing relationship field, missing code coverage, incorrect or missing permissions or visibility settings. Get just one of these wrong and suddenly your users are receiving mission-critical errors and you are up all night trying to track them down. All of this can be a rude awakening!
This article aims to provide an overview of a few best practices and some specific examples to consider when you are preparing a Salesforce deployment. If you are new to deploying with Salesforce or looking for a refresher, you’ve come to the right place.
A common pattern I see from experience is that as your DevOps strategy increases in complexity, you end up trading off time spent deploying for time spent maintaining a DevOps process. For a low-code platform like Salesforce this can often be overkill, especially for small and medium sized teams. How can you plan properly so you don’t over-engineer on DevOps but also save a lot of time on your deployments? What’s the right balance to strike? Here are some practical tips.
Best practice involves scheduling deployments during non-peak user activity periods to prevent disruption. It is also advisable to maintain a routine release schedule, deploying changes consistently and regularly, rather than in large, infrequent batches. This consistency helps to reduce risk and improve system stability.
Salesforce offers several types of sandboxes, each designed for different purposes, such as development, testing, or training. Understanding the right sandbox to use for the job at hand is critical to efficient and safe deployment. This can significantly streamline the development and deployment process while reducing the chance of errors.
In Salesforce, it's important to limit the number of users who have deployment permissions. By restricting access, you can prevent inadvertent changes that could lead to complications during the deployment process. Strict control over deployment permissions helps ensure system integrity.
The Salesforce Metadata API powers all Salesforce deployments. Familiarizing yourself with the quirks and limitations of the API can help prevent deployment issues. Though powerful, the Metadata API has limitations that are important to understand for successful deployment.
Implementing good test classes and maintaining high code coverage can catch potential issues early before they hit production. This practice reduces risk and increases stability. A strong governance structure, meanwhile, can provide clear guidelines and oversight to streamline the deployment process and ensure accountability.
As Salesforce systems grow in complexity, a clear DevOps strategy becomes essential. DevOps best practices can improve collaboration between development and operations teams, expedite deployments, and avoid the accrual of technical debt. This will help your team adapt more quickly to changes and deliver more reliable solutions.
Timing is critical. Avoiding deployment during peak times of user activity is an obvious first step. Schedule a window when most users won't be active in the system: Salesforce locks metadata and apps while they are in use, so you're likely to hit errors if many users are active. Having a routine release schedule every two to three weeks, instead of deploying months of changes at once, can make a big difference.
Logically decoupling different parts of the org that can be deployed separately will also ease your burden. There are many ways to do this. However, this requires more preparation and manual work upfront.
A simple approach can be to deploy Objects > Apex Classes > Visualforce Components > Visualforce Pages > Apex Triggers and other metadata while saving Profiles and Permission Sets for last.
The reason for deploying Objects first is to resolve dependencies early in the process. Compact Layouts and List Views are particularly painful from this perspective. As for Profiles, one should ensure all related metadata is deployed first because Profiles are an overarching layer on top of the org’s metadata that ties together many dependencies.
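As a rough sketch, the ordering above can be expressed as deployment "waves", with Profiles and Permission Sets always landing last. The helper below is hypothetical, written in Python for illustration; it is not part of any Salesforce tooling:

```python
# Hypothetical helper: group metadata components into ordered deployment
# "waves" so Objects go first and Profiles/Permission Sets go last.
from collections import defaultdict

# Earlier entries are deployed before later ones; "*" catches everything else.
DEPLOY_ORDER = [
    "CustomObject",
    "ApexClass",
    "ApexComponent",   # Visualforce Components
    "ApexPage",        # Visualforce Pages
    "ApexTrigger",
    "*",
    "Profile",
    "PermissionSet",
]

def plan_waves(components):
    """Group (metadata_type, api_name) pairs into ordered deployment waves."""
    by_type = defaultdict(list)
    for mtype, name in components:
        by_type[mtype].append(name)
    explicit = set(DEPLOY_ORDER) - {"*"}
    waves = []
    for slot in DEPLOY_ORDER:
        if slot == "*":
            leftovers = {t: ns for t, ns in by_type.items() if t not in explicit}
            if leftovers:
                waves.append(leftovers)
        elif slot in by_type:
            waves.append({slot: by_type[slot]})
    return waves

waves = plan_waves([
    ("Profile", "Admin"),
    ("CustomObject", "Invoice__c"),
    ("ApexClass", "InvoiceService"),
    ("Layout", "Invoice__c-Invoice Layout"),
])
for i, wave in enumerate(waves, 1):
    print(i, wave)
```

In practice each wave would be its own deployment (for example, its own `package.xml` manifest), validated before the next one goes out.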
(This is one reason that Blue Canvas offers manual checklists where you can note and track the necessary steps required during deployments so everyone can be on the same page and track dependencies.)
What makes Salesforce unique compared to most alternatives is that it’s a business platform where you can develop directly in a production environment. When you have multiple sets of users with different sharing settings, complex business logic, and multiple integrations that depend on one another, things can get very complex. This is especially true for large orgs where scaling can dramatically increase some operations. Developing in sandboxes can help with this, but often it can be confusing which sandboxes to use for which parts of your pipeline.
Most metadata changes should be made in a developer sandbox or a scratch org, whether it's creating an Object, changing a Layout, or creating a new Tab. These should certainly not be made in your production org, and rarely directly in your Full Copy or Partial Copy sandboxes. (Blue Canvas does support deployments from upstream environments down to lower Salesforce sandboxes, but this functionality should be used to port changes back down to your lower sandboxes post-integration, not as a tool for coding directly in production or upstream sandboxes!)
In an ideal scenario, any declarative or programmatic changes should be tested on developer sandboxes or scratch orgs and then pushed via pull request to upstream sandboxes where you would have the closest image to the production org (e.g. using a Full Copy or Partial Copy sandbox as UAT).
If you don’t do this, you’ll run into common problems such as:
Since Salesforce orgs are living applications that change constantly, admins and users might be tempted to edit Permission Sets and Profiles, or create and remove List Views for Objects, directly in production. While this may seem safe at first, changing an overarching setting like a sharing rule, a group setting, or a structural change in the role hierarchy can be dangerous to do directly in production or a full sandbox. Processing those changes can take a very long time, especially in high data volume orgs. (You can leverage Parallel Sharing Rules and Deferred Sharing Maintenance to decrease processing times for deployments and org-wide changes, but that's a topic for later.)
For example, if you are deploying a full Profile or Permission Set and the deployment package contains a sharing rule, a sharing calculation runs each time you try to deploy. If you have multiple deployment failures, you end up running the sharing rule calculation over and over. For this case, I suggest validating packages early, deploying sharing logic afterwards, and working in sandboxes with less data (i.e. developer sandboxes). We should also consider that when these changes are made in any org, Apex code recompiles, which increases processing times.
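One way to avoid running that calculation repeatedly is to validate early and then "quick deploy" the already-validated job on release day, so the full deploy (and its sharing recalculation) happens only once. A sketch that builds the commands without executing them, assuming the current `sf` CLI v2 command names (verify these against your installed CLI version):

```python
# Sketch: build validate-only and quick-deploy commands for the sf CLI (v2).
# Command and flag names are assumptions based on the modern CLI; check your
# installed version before relying on them.
def validate_command(manifest: str, target_org: str) -> list[str]:
    """Check-only deployment: runs tests and sharing logic without deploying."""
    return [
        "sf", "project", "deploy", "validate",
        "--manifest", manifest,
        "--target-org", target_org,
        "--test-level", "RunLocalTests",
    ]

def quick_deploy_command(job_id: str, target_org: str) -> list[str]:
    """Promote a previously validated job without re-running the full deploy."""
    return ["sf", "project", "deploy", "quick",
            "--job-id", job_id, "--target-org", target_org]

cmd = validate_command("manifests/release.xml", "uat")
print(" ".join(cmd))
```

The validation can run days before the release; only the short quick-deploy step needs to happen in your deployment window.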
A code review is also appropriate at this stage. Any type of change should be considered carefully and tested multiple times.
The purpose of Full Copy and Partial Copy sandboxes is to test things in an environment that is as close to production as possible. Regardless of how experienced you are with Salesforce, you will often uncover issues in your Full Copy and Partial Copy sandboxes because of the way that your code and configuration interact with actual Salesforce data. For example, if you have multiple integrations, you may run into performance problems when there are too many child records under one parent, or when DML operations lock records for too long.
As a best practice, you will want to run all test classes in a Full or Partial sandbox before you try to push to production. Too often I have seen teams trying to fix and validate everything at the time of deployment. This leads to long nights and weekends on the job instead of relaxing at home. Get your changes tested and validated in a Full or Partial Copy sandbox well before your planned deployment.
Another key to successful deployments is to keep your sandboxes regularly up to date with your production org. The refresh intervals for each type of sandbox are different, and they can be leveraged for the things each is suited for. Developer sandboxes can be refreshed every day, Partial Copy sandboxes have a 5-day limit, and Full Copy sandboxes require 29 days between refreshes. Creating a schedule for when you can refresh will save you a ton of time and headaches at deployment time.
It’s also a good idea to carefully consider who can change metadata in your orgs. You should limit the number of users with the `Customize Application` permission at the Profile level, and limit who has API access to the org. These permissions grant many capabilities to users; to name only a few:
Hence these permissions should only be given to users with enough experience with the platform. With Blue Canvas you can more granularly control who has System Administrator access to production and create rules and permissions for who can deploy when and where without resorting to granting extensive access to everyone or needing to buy production Salesforce licenses for developers.
The Metadata API is what powers all Salesforce deployments including change sets, the Ant Migration tool, the Salesforce CLI, and any IDE or commercial DevOps tool you may use.
Change sets are the simplest way to interact with the Metadata API. While it is easy to use change sets, there are some drawbacks (this list is not exhaustive):
Ant Migration Tool & Salesforce CLI
If you elect to eschew change sets (which we do recommend!), you will have to become more familiar with either the Ant Migration tool or the Salesforce CLI. Even though Salesforce is pushing the CLI over Ant, a surprisingly high number of DevOps setups in the Salesforce world still rely on Ant scripts for deployment! There are many hidden gotchas when using the Metadata API. Here are just a few:
Large deployments are a pain. A common Metadata API problem is deploying packages that are too big: the limit is 10,000 files and 400 MB per deployment. This limit measures the uncompressed size of the data, so if you go over it you’re going to get a limit error. Developers also cannot use the retrieve call for big objects that are not indexed, and there’s a 39 MB compressed limit on the retrieved file.
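A simple pre-flight check against the deploy limits mentioned above can catch a breach before deployment day. The helper name below is illustrative, not part of any Salesforce tool:

```python
# Pre-flight sketch against the Metadata API deploy limits: at most
# 10,000 files and 400 MB of *uncompressed* source per deployment.
MAX_FILES = 10_000
MAX_UNCOMPRESSED_BYTES = 400 * 1024 * 1024  # 400 MB

def check_deploy_limits(file_sizes):
    """Given the byte size of every file in the package, report limit breaches."""
    problems = []
    if len(file_sizes) > MAX_FILES:
        problems.append(f"{len(file_sizes)} files exceeds the {MAX_FILES}-file limit")
    total = sum(file_sizes)
    if total > MAX_UNCOMPRESSED_BYTES:
        problems.append(f"{total} bytes exceeds the 400 MB uncompressed limit")
    return problems

# A small package passes cleanly.
print(check_deploy_limits([4_096, 12_288]))  # []
```

Running a check like this in CI, before validation even starts, is far cheaper than discovering the limit mid-deployment.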
This is why decoupling your metadata and deploying often is so important. It can be very difficult to scramble to decouple the dependencies after hitting a size limit on a critical deployment day, especially if your teams are not aligned and dependencies are not tracked.
Permissions challenges. There’s also the issue that the Metadata API returns only custom permissions, not user permissions. When Tabs are queried, the standard (prefixed) ones are not returned. Queries for custom objects can also come back with missing information about related standard objects.
Limited support for certain metadata types. Many types of metadata in Salesforce require manual steps to deploy because they are not supported by the Metadata API.
Deploying Translations and Page Layouts correctly. Another quirk: when deploying Translations, you need to remove Translation comments first and then restore them after the deployment. There are also issues with Page Layouts, where the API returns all unpackaged Layout names and you have to manually edit the returned result.
Blue Canvas aims to address and manage many of these quirks with the Metadata API by bringing over 7 years of experience working with the Metadata API. Blue Canvas uses SFDX’s source format which makes it much easier to track and deploy your changes between your orgs. This can save tons of time if you’re working in a large org and you want to quickly compare metadata and make line-by-line edits if needed to ensure the package gets validated.
Writing good test classes and maintaining them is critical. A failing test class in a UAT environment can be a lifesaver if the tests were written according to the business logic and are functionally correct. This way issues can be addressed earlier before they hit production.
Code coverage also tells a story about your org. If tests are not written correctly, code coverage will not pass. If the tests are written correctly but you’re still not hitting the necessary coverage, that’s a signal that some of the code is redundant and can be refactored so the tests exercise the right scenarios.
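As an illustration of the arithmetic (class names and line counts below are made up), org-wide coverage is simply covered lines over total lines across all Apex classes, compared against Salesforce's 75% production requirement:

```python
# Salesforce requires at least 75% overall Apex code coverage (and some
# coverage on every trigger) before a production deployment succeeds.
# The class names and line counts here are purely illustrative.
REQUIRED_COVERAGE = 0.75

def org_coverage(per_class):
    """per_class maps class name -> (covered lines, total lines)."""
    covered = sum(c for c, _ in per_class.values())
    total = sum(t for _, t in per_class.values())
    return covered / total if total else 0.0

coverage = org_coverage({
    "InvoiceService": (80, 100),
    "InvoiceTriggerHandler": (40, 60),
})
print(f"{coverage:.0%}, passes: {coverage >= REQUIRED_COVERAGE}")  # 75%, passes: True
```

Note that one poorly covered large class can drag the whole org below the threshold even when smaller classes are at 100%, which is another argument for refactoring redundant code.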
We’ll go into more detail on testing in a future post.
On top of everything we already mentioned, there’s also the responsibility of running an org properly. Governance is a big topic and a lot can be said about understanding the value of governance for technology organizations. Here are a few tips one should consider for creating a good governance structure in your org:
DevOps is becoming a pressing issue for many Salesforce teams as the ecosystem grows and matures. Over the past few years, the amount of code and complexity for many teams has increased tremendously. More and more products, from nCino to Salesforce CPQ, are being integrated into the Salesforce suite, and each offers pathways to customization that must be maintained.
Teams who adopt the best practices and basic principles mentioned above will be far ahead of the pack and will avoid having to pay down technical debt in the future.
About the author
Atlas is a Salesforce Developer at EMPAUA and a moderator at SFXD. He shares what he has learned over his years of experience in the Salesforce community. You can read more of his articles on SalesforceBen and follow him on Twitter.
To stay ahead of the game, consider adopting Blue Canvas as your Salesforce deployment and DevOps partner. With our state-of-the-art features and tools designed to manage your deployment process smoothly, we make the complexity of Salesforce manageable and understandable. Don't wait until technical debt slows down your progress - let's get in touch today!
What is the best time to deploy changes in Salesforce?
What is the role of sandboxes in Salesforce deployments?
What is the Salesforce Metadata API and its limitations?
Why is testing important in Salesforce deployments?
What is the role of governance in managing a Salesforce org?
What is the role of DevOps in Salesforce deployments?