A quick look at the best practices for Salesforce development that have emerged in the past 3 years and in the era of Salesforce DX.
In this article, we share our insights gleaned from managing over 350 million lines of Salesforce code, outlining the habits and strategies of the most effective Salesforce developer teams worldwide. We explore key practices like utilizing sandboxes, committing frequently to integration, conducting code reviews, backing up metadata, understanding Salesforce DX, and regularly refactoring and deleting code. Through our experience and the patterns we've observed, we aim to provide a roadmap to help Salesforce users improve their development processes and efficiency.
After three years on this journey to bring the best DevOps tooling to Salesforce, we’ve learned a lot. We’re currently managing over 350M lines of Salesforce code (and growing!), and we interact with dozens of customers and prospects every week.
We’re now getting enough data to see some patterns emerge around what the most effective Salesforce developer teams are doing.
If you’re customizing Salesforce to even a moderate degree (modifying Apex, employing at least one or two full-time developers or admins), you’re investing a minimum of hundreds of thousands of dollars in Salesforce (if not millions). The investment is worth it because Salesforce is mission critical to your business.
Given that investment, here are a few of the habits we’re seeing from the top performing Salesforce teams across the world.
Most companies are underutilizing their Salesforce sandbox allotment. If you’re on the Pro pricing tier in Salesforce you get 10 sandboxes. If you’re an Enterprise customer you get 25. If you’re Unlimited you get 100! That’s a lot of sandboxes regardless of what platform version you are on.
And yet, we continually see large companies, with easy access to 25 to 100 sandboxes, cramming teams of 10, 20, or even 50 developers into a single sandbox.
This creates chaos unnecessarily.
They call a test org a sandbox because you are meant to “play” in it: tinker, experiment, try new things. If 50 people are operating in the same sandbox, experimentation becomes dangerous. Your change may overwrite another person’s code, or introduce a bug or downtime. Cramming everyone into a single sandbox limits each developer’s freedom to experiment and slows down the entire organization.
Not using your sandboxes also makes deployments slow, because you have to constantly keep track of and tease apart which changes belong in which release.
Ultimately you are paying for your sandboxes so you may as well use them. The best Salesforce development teams understand this and do it. They set up a single sandbox for each developer and admin on their team.
This is low-hanging fruit. If there is one change you could make to your Salesforce development process, it should be this. There is literally no cost to using more developer sandboxes.
Transitioning can be done this week.
Since the best teams are following a one-sandbox-per-developer model, they need a way to successfully merge changes upstream.
To do this, they typically maintain two upstream sandboxes: one called Integration and the other UAT.
Integration
The Integration sandbox is for merging code. It’s a place where developers can frequently push their changes to ensure that there are no conflicts or cascading effects from different dependencies. The Integration sandbox is typically a Partial Copy sandbox (though it could also be full copy depending on your licensing).
It’s best if developers (and admins!) can push into the Integration sandbox as frequently as possible (more on this later).
UAT
UAT is often a Partial or a Full Copy sandbox, depending on the team size and licensing model. This is where business users, product managers, and QA testers verify that changes meet product specs and business requirements. Deployments move from the Integration org into UAT at a slower cadence, perhaps once per sprint. The best teams do this daily or weekly, but there is a fairly wide distribution here depending on company size, industry, Salesforce use case, etc.
Keeping it simple in this way helps prevent unwanted changes from making their way into production and helps your team ship features faster. Once you have the proper sandbox structure in place, you can follow some of these other best practices to improve performance even further.
Pushing to Integration frequently solves myriad headaches down the line. It makes code reviews more effective, it reduces conflicts and unwanted clashes, it makes deployments faster and easier to do, and it encourages the team to follow Agile best practices by shipping small units of improvement frequently.
Admin or developer changes should go into the Integration sandbox as soon as possible. Imagine an admin who has dozens of open tickets for this release. Perhaps one ticket requires a change to a Custom Field and a Workflow rule, while a different ticket requires Layout changes and a new Field.
For many teams, the admin simply makes all of their changes in an ad hoc fashion. Then, maybe once every two weeks, they try to push everything they’ve been working on in a single deployment or change set. This leads to the dreaded spreadsheet approach and the manual, error-prone steps of compiling change sets for large batches of work.
The best teams avoid this by having the admin or developer push changes as often as possible, ideally two to three times a day. The admin updates the field and the Workflow Rule and pushes immediately; as soon as a ticket is finished, the changes go into the Integration environment.
This allows small units of change to undergo testing and code review. Because there are only a couple of changes, the deployment is likely to go through smoothly. It’s also quick to review because there are only a few differences to look at. This makes people far more likely to stick with the code review process.
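To make this concrete, one ticket’s worth of change can be described in a minimal package.xml manifest and deployed on its own. This is a sketch only; the object, field, and rule names below are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- package.xml for a single ticket: one custom field and the
     workflow rule that uses it. All member names are hypothetical. -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Opportunity.Renewal_Date__c</members>
        <name>CustomField</name>
    </types>
    <types>
        <members>Opportunity.Notify_On_Renewal</members>
        <name>WorkflowRule</name>
    </types>
    <version>45.0</version>
</Package>
```

A manifest this small is trivial to review: a second pair of eyes can see at a glance exactly which components are moving and why.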
Many managers insist on code reviews because they know it’s a good idea, but they drift away from the practice because it’s cumbersome and hard to do. Smaller commits are easier to understand and parse. If your change set includes 100 differences, it becomes difficult to determine which dependencies are involved and whether the code is truly ready for release or up to standards.
Typically, when teams do these kinds of large “code reviews,” they don’t really review them at all. They just push and hope for the best.
Code reviews have been written about extensively. They are certainly a good idea: they get a second set of eyes on changes. The most important goal we hear from Salesforce stakeholders is that they want to avoid downtime and bugs entering their Salesforce instance. Downtime and bugs are extremely costly.
When Salesforce is down revenue doesn’t flow, orders aren’t fulfilled, trucks stop. Salesforce is mission critical for most organizations running it. And it isn’t cheap either (as you probably know). So it’s worth spending the time to understand changes, log the reasoning behind them and have them approved by another person.
The code review should happen just before the change is deployed into the Integration environment. If this is done early, it’s less necessary to review at UAT when changes start to become fairly large.
Using a tool like Bitbucket, GitHub, or VSTS (or, because we’re biased, Blue Canvas) makes it easy for people to see what changes are proposed and leave comments on them.
Though we are calling it a “code” review, with Salesforce it really should be a “change” review. Declarative metadata changes, like changes to Objects and Fields, should be included, not simply Apex code. Mediocre teams have code reviews in place for Apex only. The best teams review every single aspect of their system, because declarative metadata affects the user experience just as much as Apex code.
Ask your team: are you reviewing metadata changes in your code reviews?
These code (or change) reviews also encourage collaboration and understanding of your codebase. They are a risk mitigation strategy.
Imagine that someone on your team is responsible for a key piece of functionality. They worked hard on it and delivered it over three sprints, pulling late nights and a lot of long days in the weeks leading up to the release. Once it’s deployed, they go on a much-deserved vacation. Two days later, when they are sipping pina coladas on a beach in Bora Bora, a problem is discovered. The sales team hates the change, or it’s not having the intended effect. No one has any idea how the change works, or which metadata components are important and which are not. Rolling back becomes impossible until that person returns in two weeks.
If there had been a code review, another person would be responsible for understanding that code. Which means that you’d have another person on staff who could help you roll this back while the perpetrator is enjoying their vacation!
The same applies if you are using consultants. It’s good to have your team understand the changes that consultants are making and how they affect all parts of your system.
Finally, code reviews provide some level of permission based deployments for Salesforce. You can rest in the knowledge that there is a process behind what is changing in Salesforce and that only trusted reviewers have the ability to deploy changes.
Speaking of rollbacks: given the level of investment in Salesforce licenses, bolt-on products, and Salesforce dev/admin salaries, and given how important Salesforce uptime is to their customers, it still amazes us how few teams take the relatively simple and inexpensive step of backing up their Salesforce metadata.
Being able to rollback your Salesforce code is very comforting and allows your team to move boldly and with confidence instead of tiptoeing in fear.
Source control is the best way to back up your metadata. Failing that, you could at least download your metadata into a zip file every day.
We know some teams who are literally paying their admin to do the zip method every day. They believe that is the most cost-effective way to back up their metadata. We don’t agree with that logic per se (the potential cost of major downtime, and the cost of the admin’s time, far outweigh the benefit of doing it this way), but we have to tip our hat to them for taking some action!
Mediocre teams have an Ant script pull their metadata and commit it into Git every day. This is fairly brute force and also has a high maintenance cost, but it’s getting closer to what the best teams are doing.
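A modern equivalent of that nightly pull-and-commit job can be sketched with the Salesforce CLI instead of Ant. This is a minimal sketch, not a production script: it assumes the `sfdx` CLI is installed and authenticated, and the org alias (`prod`), repo path, and manifest are all hypothetical examples.

```shell
#!/usr/bin/env bash
# Nightly metadata backup sketch. Assumptions (all hypothetical):
#   - the sfdx CLI is installed and an org aliased "prod" is authorized
#   - /srv/sf-backup is an existing Git clone with a package.xml manifest

backup_metadata() {
  # Degrade gracefully on machines without the Salesforce CLI.
  if ! command -v sfdx >/dev/null 2>&1; then
    echo "sfdx CLI not found; skipping backup"
    return 0
  fi

  cd /srv/sf-backup || return 1   # existing Git repo (hypothetical path)

  # Retrieve everything listed in package.xml from production.
  sfdx force:mdapi:retrieve -u prod -k package.xml -r ./retrieve
  unzip -o ./retrieve/unpackaged.zip -d .

  # Commit whatever changed since yesterday's run.
  git add unpackaged
  git commit -m "Nightly metadata backup $(date +%F)" \
    || echo "No changes today"
}

backup_metadata
```

Run from cron, this gives you a dated, diffable history of your org’s metadata for roughly the cost of a Git repo.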
The best teams are actually using source control as it was intended. Salesforce can make this tricky but it’s well worth the investment. Of course, you could get this benefit by using a tool like Blue Canvas to continuously back up your Salesforce Metadata (and avoid the challenges and costs altogether), but again we recognize our bias here.
At the very least, we cannot recommend highly enough some kind of backup strategy for Salesforce metadata. Salesforce simply makes it too easy to make unwanted changes. The system is so complex that people are often tempted to make changes directly in production, and it’s easy to push a change set that wipes out someone else’s changes (see the tragic tweet above).
Having the ability to recover your changes shows that you have thought about the business value of your Salesforce instance being up and allows your team to move fast without breaking things.
Salesforce DX is the bleeding edge of the development experience for Salesforce. It’s been available for about two years now, and though it is not perfect, understanding it is very important.
That said, we have found very few teams that are actually using Salesforce DX today. Those that are tend to be greenfield projects where it’s easy to set DX up from the start. These teams also tend to be highly skilled developers who come from other platforms like .NET and are used to working with tools like TFS or VSTS.
Legacy apps are generally unable to really support or use Salesforce DX.
Source: https://developer.salesforce.com/platform/dx
We would not actually recommend that legacy apps even try to use Salesforce DX. It overlooks too many realities of Salesforce. For example, it advocates source-driven development, yet it does not recognize that the org is, and always will be, the source of truth. As long as it’s possible to edit the org directly, the org will be the source of truth. (Blue Canvas exists to solve this very problem, in fact, but that is for another essay!)
Still, understanding Salesforce DX allows teams to see the direction that Salesforce is heading. And it’s towards treating Salesforce like a real developer platform, just like .NET, Java or Node.js.
The best teams are closely following Salesforce DX, understanding its roadmap and its strengths and limitations. And they are positioning themselves to take advantage of it when the tooling matures.
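For teams wanting a feel for what DX offers, the core scratch-org loop can be sketched in a few CLI commands. This is a sketch under assumptions: the `sfdx` CLI is installed, a Dev Hub is authorized, and the alias and file names are hypothetical.

```shell
# Sketch of the core Salesforce DX scratch-org workflow.
# Assumes (hypothetically): sfdx installed, a Dev Hub authorized,
# and a DX-format project with config/project-scratch-def.json.

dx_demo() {
  # Degrade gracefully where the Salesforce CLI is not installed.
  command -v sfdx >/dev/null 2>&1 || { echo "sfdx not installed"; return 0; }

  # Create a disposable scratch org from the project's org definition
  # and make it the default (-s) for subsequent commands.
  sfdx force:org:create -f config/project-scratch-def.json -a my-scratch -s

  # Push local source (force-app/) into the scratch org, work, then
  # pull any changes made in the org back into source files.
  sfdx force:source:push
  sfdx force:source:pull

  # Throw the org away when the feature branch is done.
  sfdx force:org:delete -u my-scratch -p
}

dx_demo
```

The appeal is clear: disposable orgs and source files as the working artifact. The limitation, as noted above, is that this model assumes your repo is the source of truth, while in most real orgs it is not.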
The seventh habit in Stephen Covey’s “The 7 Habits of Highly Effective People” is “Sharpen the Saw.” The most effective people recognize that they are not perfect and constantly strive to improve themselves and their processes. It’s no different with the best-performing Salesforce teams.
The very best Salesforce teams make time to think at a high level about their processes and whether or not they are as optimized as they can be. They pay attention to new tools that are coming out and follow trends like Salesforce DX.
They also make time to pay down technical debt and refactor code. They remove unused Apex classes and triggers with destructive changes. They build for the future and understand that, especially in the largest organizations, the changes they make today could very well still be running in Salesforce ten years from now. So taking a few extra minutes to do a code review and to maintain and clean up your codebase and metadata is well worth the time and effort.
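As one concrete example of paying down debt, unused Apex can be removed with a destructive changes deployment: a destructiveChanges.xml listing the components to delete, deployed alongside an (otherwise empty) package.xml. This is a sketch; the class and trigger names below are hypothetical.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- destructiveChanges.xml: deletes the listed components when deployed
     alongside an empty package.xml. All names here are hypothetical. -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>LegacyQuoteHelper</members>
        <members>LegacyQuoteHelperTest</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>OldAccountTrigger</members>
        <name>ApexTrigger</name>
    </types>
</Package>
```

Because the deleted components are spelled out in a reviewable file, deletions can go through the same change review process as everything else.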