Salesforce Developer Atlas Can offers some practical advice for making releases less painful in Salesforce.
We are excited to announce that Atlas Can, a Salesforce Developer at EMPAUA and a moderator at SFXD, will be joining us to write blog posts over the coming months. As a self-taught admin turned developer, his mission is to share what he has learned over the years from his own experience and from some of the best people in the ecosystem. You can find his previous interviews and articles at SalesforceBen. You can also find him on LinkedIn and Twitter. Look for more technical posts from Atlas in the coming weeks!
Ah, deployment day! For the uninitiated, Salesforce deployments can be quite difficult to tackle at first. (I’m looking at you, solo admins and developers - the unsung heroes of the Salesforce world!) Deployments can fail for many reasons, and I’m sure you have been there: duplicate or inactive picklist values, a missing relationship field, missing code coverage, incorrect or missing permissions or visibility settings. Get just one of these wrong and suddenly your users are receiving mission-critical errors and you are up all night trying to track them down. All of this can be a rude awakening!
This article aims to provide an overview of a few best practices and some specific examples to consider when you are preparing a Salesforce deployment. If you are new to deploying with Salesforce or looking for a refresher, you’ve come to the right place.
A common pattern I see from experience is that as your DevOps strategy increases in complexity, you end up trading off time spent deploying for time spent maintaining a DevOps process. For a low-code platform like Salesforce this can often be overkill, especially for small and medium sized teams. How can you plan properly so you don’t over-engineer on DevOps but also save a lot of time on your deployments? What’s the right balance to strike? Here are some practical tips.
Timing is critical. Obviously, avoiding deployment during peak times of user activity is a good start. Pick a window when most users won’t be active in the system. Salesforce locks metadata and apps while they are in use, so you’re going to get an error if many users are active. Having a routine release schedule every two to three weeks, instead of deploying months of changes at once, can make a big difference.
Logically decoupling different parts of the org that can be deployed separately will also ease your burden. There are many ways to do this. However, this requires more preparation and manual work upfront.
A simple approach can be to deploy Objects > Apex Classes > Visualforce Components > Visualforce Pages > Apex Triggers and other metadata while saving Profiles and Permission Sets for last.
The reason for deploying Objects first is to resolve dependencies early in the process. Compact Layouts and List Views are particularly painful from this perspective. As for Profiles, one should ensure all related metadata is deployed first because Profiles are an overarching layer on top of the org’s metadata that ties together many dependencies.
(This is one reason that Blue Canvas offers manual checklists where you can note and track the necessary steps required during deployments so everyone can be on the same page and track dependencies.)
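The staged ordering described above can be sketched in a few lines. This is an illustrative Python helper, not part of any Salesforce tooling: the metadata type names are real Metadata API types, but the function and its grouping strategy are assumptions for demonstration.

```python
# Illustrative sketch: stage (metadata_type, member) pairs into an ordered
# series of deployments, holding Profiles and Permission Sets back until last.
DEPLOY_ORDER = [
    "CustomObject",   # objects first, to resolve dependencies early
    "ApexClass",
    "ApexComponent",  # Visualforce components
    "ApexPage",       # Visualforce pages
    "ApexTrigger",
    "Profile",        # overarching layers go last
    "PermissionSet",
]

def stage_deployments(components):
    """Group (metadata_type, member_name) pairs into deploy stages.

    Types not in DEPLOY_ORDER land in a middle 'Other' stage, after
    triggers but before Profiles and Permission Sets.
    """
    stages = {t: [] for t in DEPLOY_ORDER}
    other = []
    for mtype, member in components:
        if mtype in stages:
            stages[mtype].append(member)
        else:
            other.append((mtype, member))
    ordered = [(t, stages[t]) for t in DEPLOY_ORDER[:5] if stages[t]]
    if other:
        ordered.append(("Other", other))
    ordered += [(t, stages[t]) for t in DEPLOY_ORDER[5:] if stages[t]]
    return ordered
```

Each stage can then become its own package or pull request, so a failure in one stage doesn’t force you to redeploy everything.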
What makes Salesforce unique compared to most alternatives is that it’s a business platform where you can develop directly in a production environment. When you have multiple sets of users with different sharing settings, complex business logic, and multiple integrations that depend on one another, things can get very complex. This is especially true for large orgs, where scaling can dramatically increase the time some operations take. Developing in sandboxes can help with this, but it can often be confusing which sandboxes to use for which parts of your pipeline.
Most metadata changes should be done in a developer sandbox or a scratch org, whether it’s creating an Object, changing a Layout, or creating a new Tab. These should certainly not be done in your production org, and rarely directly in your Full Copy or Partial Copy sandboxes. (Blue Canvas does support deployments from your upstream environments to lower Salesforce sandboxes, but this functionality should be used to port changes back down to your lower sandboxes post-integration, not as a tool to promote coding directly in production or upstream sandboxes!)
In an ideal scenario, any declarative or programmatic changes should be tested on developer sandboxes or scratch orgs and then pushed via pull request to upstream sandboxes where you would have the closest image to the production org (e.g. using a Full Copy or Partial Copy sandbox as UAT).
If you don’t do this, you’ll run into common problems such as:
Since Salesforce orgs are living applications that change constantly, admins and users might be tempted to edit Permission Sets and Profiles, or create and remove List Views, directly in production. While this may seem safe at first, changing an overarching setting like a sharing rule, a group setting, or the structure of the role hierarchy directly in production or a full sandbox can be dangerous. Processing those changes can take a very long time, especially in high-data-volume orgs. (You can leverage Parallel Sharing Rules and Deferred Sharing Maintenance to decrease processing times during deployments, but that’s a topic for later.)
For example, if you are deploying a full Profile or Permission Set and the deployment package contains a sharing rule, a sharing recalculation runs each time you try to deploy. If you have multiple deployment failures, you end up running that sharing rule calculation over and over. For this case, I suggest validating packages early, deploying sharing logic separately afterwards, and working with sandboxes that hold less data (i.e. developer sandboxes). Keep in mind that these changes also cause Apex code to recompile in the target org, which increases processing times.
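One way to deploy sharing logic separately is to split the sharing-related types out of your `package.xml` into a second package that you deploy only after the main one validates. A minimal sketch, assuming standard Metadata API type names (`SharingRules` is the real parent type; the helper itself is hypothetical):

```python
import copy
import xml.etree.ElementTree as ET

SF_NS = "http://soap.sforce.com/2006/04/metadata"
# Types whose deployment triggers sharing recalculation; list is illustrative.
SHARING_TYPES = {"SharingRules", "SharingCriteriaRule", "SharingOwnerRule"}

def split_sharing(package_xml: str):
    """Return (main_xml, sharing_xml): the original package.xml with sharing
    types removed, plus a second package.xml holding only the sharing types."""
    ET.register_namespace("", SF_NS)
    root = ET.fromstring(package_xml)
    sharing_root = ET.Element(f"{{{SF_NS}}}Package")
    for types_el in list(root.findall(f"{{{SF_NS}}}types")):
        name = types_el.find(f"{{{SF_NS}}}name").text
        if name in SHARING_TYPES:
            root.remove(types_el)
            sharing_root.append(types_el)
    # carry the API version over to the second package
    version = root.find(f"{{{SF_NS}}}version")
    if version is not None:
        sharing_root.append(copy.deepcopy(version))
    return (ET.tostring(root, encoding="unicode"),
            ET.tostring(sharing_root, encoding="unicode"))
```

With this split, a failed validation of the main package no longer costs you a sharing recalculation on every retry.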
A code review is also appropriate at this stage. Any type of change should be considered carefully and tested multiple times.
The purpose of Full Copy and Partial Copy sandboxes is to test things in an environment that is as close to production as possible. Regardless of how experienced you are with Salesforce, you will often uncover issues in your Full Copy and Partial Copy sandboxes because of the way your code and configuration interact with actual Salesforce data. For example, if you have multiple integrations, you may run into performance problems when too many child records sit under one parent and DML operations lock those records for too long.
As a best practice, you will want to run all test classes in a Full or Partial sandbox before you try to push to production. Too often I have seen teams trying to fix and validate everything at the time of deployment. This leads to long nights and weekends on the job instead of relaxing at home. Get your changes tested and validated in a Full or Partial Copy sandbox well before your planned deployment.
Another key to successful deployments is to regularly keep your sandboxes up to date with your production org. The refresh intervals for each type of sandbox are different, and they can be leveraged to accomplish the things each type is suited for. Developer sandboxes can be refreshed every day, Partial Copy sandboxes have a 5-day refresh interval, and Full Copy sandboxes require 29 days between refreshes. Creating a schedule for when you can refresh will save you a ton of time and headaches at deployment time.
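Those refresh intervals are easy to build a schedule around. A small sketch using the intervals quoted above (the helper and its naming are illustrative, not a Salesforce API):

```python
from datetime import date, timedelta

# Minimum days between refreshes, per sandbox type (as described above).
REFRESH_INTERVALS = {
    "Developer": 1,
    "Developer Pro": 1,
    "Partial Copy": 5,
    "Full Copy": 29,
}

def next_refresh(sandbox_type: str, last_refreshed: date) -> date:
    """Earliest date a sandbox of this type can be refreshed again."""
    return last_refreshed + timedelta(days=REFRESH_INTERVALS[sandbox_type])
```

Planning Full Copy refreshes around your release calendar is the important part: a 29-day interval means a mistimed refresh can leave your UAT environment stale for a whole release cycle.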
It’s also a good idea to carefully consider who can change metadata in your orgs. You should limit the number of users with the `Customize Application` permission at the Profile level, and limit who has API access to the org. These permissions grant users broad capabilities, from creating and modifying objects, fields, and page layouts to changing many org-wide settings.
Hence these permissions should only be given to users with enough experience with the platform. With Blue Canvas you can more granularly control who has System Administrator access to production and create rules and permissions for who can deploy when and where without resorting to granting extensive access to everyone or needing to buy production Salesforce licenses for developers.
The Metadata API is what powers all Salesforce deployments including change sets, the Ant Migration tool, the Salesforce CLI, and any IDE or commercial DevOps tool you may use.
Change sets are the simplest way to interact with the Metadata API. While they are easy to use, change sets have some well-known drawbacks: they must be assembled by hand in the UI, they offer no version history, they cannot remove components from the target org, and they only work between connected orgs.
Ant Migration Tool & Salesforce CLI
If you elect to eschew change sets (which we do recommend!), you will have to become more familiar with either the Ant Migration tool or the Salesforce CLI. Even though Salesforce is pushing the CLI over Ant, a surprisingly high number of DevOps setups in the Salesforce world still rely on Ant scripts for deployment! There are many hidden gotchas when using the Metadata API. Here are just a few:
Large deployments are a pain. A common Metadata API problem is deploying packages that are too big: a deployment is capped at 10,000 files and 400 MB. That limit measures the uncompressed size of the data, so if you go over it you’re going to get a limit error. Developers also cannot use the retrieve call for big objects that are not indexed, and the compressed file is limited to 39 MB.
This is why decoupling your metadata and deploying often is so important. It can be very difficult to scramble to decouple the dependencies after hitting a size limit on a critical deployment day, especially if your teams are not aligned and dependencies are not tracked.
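A pre-flight check against those limits is cheap to script. This is a hypothetical helper built on the numbers above (10,000 files, 400 MB uncompressed, 39 MB compressed), not part of any Salesforce tool:

```python
import os
import zipfile

MAX_FILES = 10_000                    # Metadata API file-count limit
MAX_UNCOMPRESSED = 400 * 1024 * 1024  # 400 MB, measured uncompressed
MAX_COMPRESSED = 39 * 1024 * 1024     # 39 MB limit on the zip itself

def check_package(zip_path: str):
    """Return a list of limit violations for a zipped deployment package."""
    problems = []
    with zipfile.ZipFile(zip_path) as zf:
        infos = zf.infolist()
        if len(infos) > MAX_FILES:
            problems.append(f"{len(infos)} files exceeds {MAX_FILES}")
        uncompressed = sum(i.file_size for i in infos)
        if uncompressed > MAX_UNCOMPRESSED:
            problems.append("uncompressed size over 400 MB")
    if os.path.getsize(zip_path) > MAX_COMPRESSED:
        problems.append("compressed size over 39 MB")
    return problems
```

Running a check like this in CI catches the size problem before deployment day, when there is still time to split the package.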
Permissions challenges. There’s also the issue that the Metadata API returns only custom permissions, not user permissions. When Tabs are queried, the standard-prefixed ones are not returned. Queries for custom objects can also come back with missing information about standard objects.
Limited support for certain metadata types. Many types of metadata in Salesforce require manual steps to deploy because they are not supported by the Metadata API.
Deploying Translations and Page Layouts correctly. Another quirk: when deploying Translations, you need to remove translation comments first and restore them after the deployment. There are also issues with Page Layouts, where the API returns all unpackaged Layout names and you have to manually edit the returned result.
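Stripping those comments before packaging is simple to automate. A minimal sketch, assuming the comments are ordinary XML comments in the translation file (the helper is illustrative, not a Salesforce utility):

```python
import re

# Matches XML comments, including multi-line ones.
COMMENT_RE = re.compile(r"<!--.*?-->", re.DOTALL)

def strip_xml_comments(source: str) -> str:
    """Remove XML comments (e.g. from a .translation file) before deploying;
    the originals can be restored once the deployment succeeds."""
    return COMMENT_RE.sub("", source)
```

Keeping the commented originals in version control means nothing is lost when the stripped copy is deployed.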
Blue Canvas aims to address and manage many of these quirks with the Metadata API by bringing over 7 years of experience working with the Metadata API. Blue Canvas uses SFDX’s source format which makes it much easier to track and deploy your changes between your orgs. This can save tons of time if you’re working in a large org and you want to quickly compare metadata and make line-by-line edits if needed to ensure the package gets validated.
Writing good test classes and maintaining them is critical. A failing test class in a UAT environment can be a lifesaver if the tests were written according to the business logic and are functionally correct. This way issues can be addressed earlier before they hit production.
Code coverage also tells a story about your org. If tests are not written correctly, code coverage will not pass. If the tests are written correctly but you’re still not hitting the necessary coverage, that’s a signal that the code is redundant and can be refactored so the tests exercise the right scenarios.
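Salesforce requires at least 75% overall Apex test coverage (and some coverage on every trigger) before a production deployment succeeds. Since Apex only runs inside Salesforce, here is a Python sketch of a hypothetical pre-flight check over per-class coverage numbers:

```python
REQUIRED_COVERAGE = 0.75  # Salesforce's minimum overall coverage for production

def coverage_report(covered_lines: dict, total_lines: dict):
    """Summarize coverage; both dicts map class name -> line counts.

    Flags whether the org-wide 75% threshold is met and names the
    least-covered class as the first refactoring or test-writing target.
    """
    covered = sum(covered_lines.values())
    total = sum(total_lines.values())
    overall = covered / total if total else 1.0
    worst = min(total_lines, key=lambda c: covered_lines[c] / total_lines[c])
    return {
        "overall": overall,
        "passes": overall >= REQUIRED_COVERAGE,
        "lowest_class": worst,
    }
```

Surfacing the least-covered class turns a failed coverage check from a vague error into a concrete to-do item.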
We’ll go into more detail on testing in a future post.
On top of everything we already mentioned, there’s also the responsibility of running an org properly. Governance is a big topic, and a lot can be said about its value for technology organizations. A good governance structure defines who can change what, in which environment, and how those changes are reviewed and released.
DevOps is becoming a pressing issue for many Salesforce teams as the ecosystem grows and matures. Over the past few years, the amount of code and complexity for many teams has increased tremendously. More and more Salesforce products are being integrated into the product suite, and from nCino to Salesforce CPQ, each offers pathways to customization that must be maintained.
Teams who adopt the best practices and basic principles mentioned above will be far ahead of the pack and will avoid having to pay down technical debt in the future.