When developing application solutions, there inevitably comes a time when we need to deploy a new version in production.
There are many ways of doing this, but in this article I’d like to suggest a few effective and reliable solutions that I’ve had the opportunity to use over the last few years at Digiwin.
In this article, I’ll focus on code deployment solutions based on open-source languages, which typically run in a Linux environment.
What is Continuous Deployment?
Automating deployment of the built artifacts is essential to making the delivery of new software versions reliable.
Generally speaking, when we learn to develop software in school, we deliver the sources manually, via an FTP server for example. However, as in any industry, manual action can lead to human error.
That’s why it’s important, right from the initialization phase of your project, to set up a delivery procedure and automate it. We call this Continuous Deployment (CD).
Adapt your continuous deployment strategy to your project management
In IT services companies (ESNs in France), we generally contract for the delivery of a batch of features. This is why we often adopt a strategy of delivering features in batches, grouped together in a single version of the software.
Software vendors, on the other hand, often have shorter cycles: features are developed in parallel and released to production as they are completed.
On the source management side, we often find two Git branching methodologies, chosen according to the project management method: Gitflow or GitHub Flow.
Two main deployment approaches
The “Push” approach
The Push approach relies on an external system, usually a continuous integration tool such as GitLab pipelines or GitHub Actions.
This tool triggers a series of actions based on Git commits, then connects to the server or cluster to update the deliverable, pushing the new changes to the execution environment.
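As a sketch, a push-style deploy step run at the end of a CI job might look like the following. The host, image name, and compose file path are placeholders, and `RUN=echo` keeps it a dry run:

```shell
#!/bin/sh
# Hypothetical push-style deploy step run by a CI job (all names are
# placeholders). RUN defaults to "echo" so the script only prints the
# commands; set RUN= (empty) to actually execute them.
set -eu
RUN="${RUN:-echo}"

IMAGE_TAG="registry.example.com/app:${CI_COMMIT_SHA:-latest}"
DEPLOY_HOST="deploy@app.example.com"

# Publish the image built earlier in the pipeline...
$RUN docker push "$IMAGE_TAG"
# ...then connect to the server and restart the service on the new image.
$RUN ssh "$DEPLOY_HOST" \
  "docker pull $IMAGE_TAG && docker compose -f /srv/app/compose.yml up -d"
```

The SSH key and registry credentials would live in the CI tool’s secret store, which is precisely the trade-off discussed below.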
Advantages / Disadvantages
Pros:
- Branch protection and access restrictions are handled at the SCM level
- The deployment procedure is versioned alongside the sources
- Deployment status is visible in the SCM
- Many different tools can be used, thanks in particular to Docker
Cons:
- The tool must have technical access (e.g. SSH) to the operating platform
- The SCM stores the credentials for connecting to the servers
- You are locked into a specific SCM: if you migrate to another tool, the deployment procedure has to be rebuilt
In practice
Classic Docker container hosting:
- The Docker daemon can be exposed via TCP for remote control, which makes it possible to trigger the execution of a container on the server directly.
- Alternatively, you can connect to the server via SSH and use local Docker commands, without needing to expose the Docker daemon over TCP.
The advantage is that you can still connect to the server via SSH and manipulate containers without going through the pipeline tool.
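Both options can be driven with the standard Docker CLI through the `DOCKER_HOST` variable (host names are placeholders; `RUN=echo` makes this a dry run):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo): two ways to drive a remote Docker daemon.
RUN="${RUN:-echo}"

# 1. Daemon exposed over TCP (should be protected with TLS in practice):
$RUN env DOCKER_HOST=tcp://app.example.com:2376 docker compose up -d

# 2. Over SSH: no exposed daemon, a regular SSH account is enough.
$RUN env DOCKER_HOST=ssh://deploy@app.example.com docker compose up -d
```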
Kubernetes:
- It is possible to connect to the control plane and manage deployments using kubectl or Helm.
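For example, a pipeline job holding cluster credentials could roll out a new image with either tool (image, release, and namespace names are placeholders; `RUN=echo` keeps it a dry run):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo) of push-style Kubernetes deployments.
RUN="${RUN:-echo}"
IMAGE="registry.example.com/app:1.2.3"   # hypothetical image tag

# With kubectl: update the container image of an existing Deployment.
$RUN kubectl set image deployment/app app="$IMAGE" -n production

# With Helm: upgrade (or install) the release with the new tag.
$RUN helm upgrade --install app ./chart -n production --set image.tag=1.2.3
```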
Non-containerized:
- We can upload sources via rsync or scp and then possibly connect via SSH to the server to execute commands (database migration, cache cleanup, etc.).
- You can use tools like Ansistrano to automate the transition between two versions of the source code.
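Tools like Ansistrano essentially maintain a `releases` directory and an atomically switched `current` symlink. A minimal local simulation of that switch (paths are illustrative):

```shell
#!/bin/sh
# Minimal simulation of an Ansistrano-style release switch. In a real
# deployment the files would arrive via rsync/scp and these commands
# would run on the server over SSH.
set -eu
APP=/tmp/demo-app
mkdir -p "$APP/releases/v2"
printf 'new code\n' > "$APP/releases/v2/index.txt"

# Point "current" at the new release atomically; rolling back is just
# re-pointing the symlink at the previous release.
ln -sfn "$APP/releases/v2" "$APP/current"
cat "$APP/current/index.txt"   # -> new code
```

The web server or process manager always serves from `current`, so the cut-over between versions is a single symlink change.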
The “Pull” approach
Version changes are managed from within the server or cluster.
This requires an operator (a program) to monitor the source manager (or another source of truth) and trigger the application update. This operator defines and maintains the deployment state.
Advantages / Disadvantages
Pros:
- There’s no need to expose the servers to external tools: deployment traffic is outbound only.
- No need to modify the deployment procedure when changing SCM, so you’re less locked into a single system.
- Administration rights can remain local, and environment protections/restrictions cannot be altered from the outside.
- Deployment monitoring is easier, since the system is constantly synchronized with production, and the tooling sits closer to the runtime.
Cons:
- Secrets and environment variables are managed on the runtime side, so an external backup of this data is needed to restore it in the event of damage.
- There may be a delay between the SCM update and the actual deployment: this approach generally relies on polling, i.e. checking for updates at a certain frequency.
- Deployment configurations are versioned independently of the application code. Care must therefore be taken when the delivery of a new version requires additional commands, for example.
In practice
Classic Docker container hosting:
- You can simply check a registry periodically for a new image and update automatically. Example tool: Watchtower.
- Deploy a runner on the server that waits to execute a deployment pipeline. This mode is a hybrid with the Push approach: access to the server is not publicly exposed, and the runner periodically checks whether it has a task to perform.
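For the first option, Watchtower runs as a container alongside the application; a typical invocation looks like this (the 300-second interval is just an example, and `RUN=echo` keeps it a dry run):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo). Watchtower watches the running containers and
# pulls/recreates them when a newer image appears in the registry.
RUN="${RUN:-echo}"
$RUN docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 300   # poll the registry every 5 minutes
```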
Kubernetes:
- Use a tool like ArgoCD, which is a comprehensive tool for managing the state of a Kubernetes cluster.
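As an illustration, registering an application with the `argocd` CLI and enabling automated sync might look like this (the repository URL, path, and namespace are placeholders; `RUN=echo` keeps it a dry run):

```shell
#!/bin/sh
# Dry-run sketch (RUN=echo): declare an Argo CD application that tracks a
# Git repo of manifests and keeps the cluster synchronized with it.
RUN="${RUN:-echo}"
$RUN argocd app create app \
  --repo https://example.com/app-manifests.git \
  --path k8s --revision main \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace production \
  --sync-policy automated
```

From then on, deploying a new version is a Git commit to the manifests repository; no external tool ever connects to the cluster.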
Non-containerized:
- You can periodically check whether new sources are available in Git or in an S3 bucket.
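A naive version of this polling logic, with the deployment itself stubbed out (in reality the revision would come from e.g. `git ls-remote "$REPO_URL" HEAD | cut -f1`, run on a cron or loop):

```shell
#!/bin/sh
# Naive pull-style poller: redeploy only when the observed revision changes.
# deploy() is a stub; the revision would normally come from git ls-remote
# or an S3 object version.
set -eu
STATE_FILE=/tmp/deployed-sha
rm -f "$STATE_FILE"              # start from a clean state for the demo

deploy() {
  echo "deploying $1"            # replace with real deployment steps
  printf '%s\n' "$1" > "$STATE_FILE"
}

check_and_deploy() {
  latest="$1"
  current="$(cat "$STATE_FILE" 2>/dev/null || echo none)"
  [ "$latest" != "$current" ] && deploy "$latest" || true
}

check_and_deploy abc123   # new revision: deploys
check_and_deploy abc123   # same revision: does nothing
```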
Conclusion
Whether you use a Pull or Push approach, you’ll need to adapt your tools to the way you work. This choice must also take into account the players in charge of managing deployment procedures: OPS team or developers?
Your SCM will also undoubtedly influence the choice of your continuous deployment strategy.
Each of these strategies has its advantages and disadvantages. At Digiwin, we’re here to help you make the right choice and set up a reliable, long-term foundation for the deployment of your applications.