Loves Cloud Helps Client Achieve 2,000+ Container Deployments on a Kubernetes Cluster
Adopting new technologies to transform customer experience has become the order of the day for top companies. Even small and medium enterprises use DevOps and the cloud to make their operations smoother. With Kubernetes, DevOps becomes simpler to ingrain.
In 20__, [company name XYZ] approached Loves Cloud to help it embrace cloud and DevOps culture, with the deployment of a Kubernetes cluster as the stepping stone. Embracing cloud and DevOps had become indispensable for the client, a conglomerate dealing in telecom, retail, and petrochemicals among other businesses: any downtime or bug in the company's applications would prove too costly. The company's developers realized it was time for their monolithic applications to make way for microservices in modular, containerized form. It was a mammoth task, as more than 100 microservices were to be deployed. The client wanted a robust solution on a challenging timeline, and Loves Cloud took up the challenge and was happy to deliver.
Why was this digital transformation needed? What were the real problems?
The first problem the client faced was scalability. As the company grew bigger and more successful over the years, its internal monolithic applications grew heavier too, with numerous new features and bug fixes added over time. Updating these huge applications became extremely tedious and time-consuming, so the company needed newer methods of application deployment. It also wanted to cut costs by using cloud services instead of physical data centres. Our client needed the ability to roll out new features and bug fixes to existing applications without long delays, and this was only possible with containers and Kubernetes.
To meet the client's requirements, we first needed to establish the goals. The first goal that emerged from our discussion with the company's AVP was deploying Kubernetes on Azure Stack.
As the company was already using Azure services, we decided to deploy Kubernetes on Azure Stack. Zero downtime was one of the customer's key requirements, so we made that our second goal.
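Zero-downtime releases in Kubernetes are usually achieved with a rolling-update strategy that never takes existing replicas offline. The sketch below is a minimal illustration, not the client's actual configuration; the application name, image, and replica count are invented for the example.

```python
# Minimal sketch: a Deployment manifest whose update strategy keeps
# every existing replica serving while new pods roll in.
# Names and image are hypothetical, not from the client's environment.

def rolling_update_deployment(name, image, replicas=3):
    """Build a Deployment dict with maxUnavailable: 0 so the desired
    replica count is never reduced during an update."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {
                    "maxUnavailable": 0,  # never drop below desired count
                    "maxSurge": 1,        # add one extra pod at a time
                },
            },
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = rolling_update_deployment("web-frontend", "registry.example.com/web-frontend:v2")
```

Serialized to YAML and applied with kubectl, a manifest shaped like this lets Kubernetes replace pods one at a time while the old ones keep serving traffic.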
Our third goal was to make the Kubernetes cluster highly available and robust by adopting a multi-master model, in which several control-plane (master) nodes manage the cluster so that no single node is a point of failure.
Since the company already had a CI/CD pipeline in place, it was natural for us to connect Kubernetes to that pipeline: developers would deploy builds to Kubernetes and then test, analyze, and release them. Companies that adopt Kubernetes typically do so to deepen their DevOps practice, and they also plan for disaster recovery to minimize downtime and data loss. Our client showed the same foresight, which gave us our fourth goal: to convert the entire deployment into Infrastructure as Code.
How we walked the talk:
Our first task was to set up Kubernetes so that it remained highly available. As already stated, we chose Azure Stack to deploy Kubernetes. To make the cluster highly available, we used etcd as the data store and created multiple master nodes. etcd is resilient to failure because it is replicated across the control-plane nodes and uses a consensus protocol (Raft) to stay consistent, while Kubernetes controllers continuously reconcile the actual state of the system against the desired state stored in etcd.
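etcd's resilience comes from majority consensus: a write commits only when a quorum of members acknowledges it, so an n-member cluster tolerates the loss of n minus quorum members. The arithmetic below is a general illustration of that rule, not a statement of how many nodes the client actually ran.

```python
# Illustrative quorum arithmetic for an etcd cluster (Raft consensus).
# A write commits once a majority of members acknowledge it, so the
# cluster keeps working as long as a quorum of members survives.

def etcd_fault_tolerance(members):
    """Return (quorum size, failures tolerated) for a cluster of the given size."""
    quorum = members // 2 + 1
    return quorum, members - quorum

for n in (1, 3, 5):
    quorum, tolerated = etcd_fault_tolerance(n)
    print(f"{n} members: quorum={quorum}, tolerates {tolerated} failure(s)")
```

A 3-member cluster survives one failure and a 5-member cluster survives two, which is why odd-sized clusters of three or five are the usual recommendation; adding an even member raises the quorum without improving tolerance.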
Running multiple master nodes also ensured that the failure of a single master could not stop the applications from running. Because the client handles an enormous volume of requests from internal and external stakeholders, it became necessary to manage the load on the applications so that they do not collapse. We used an Ingress controller for this, configuring it to route traffic across our client's hundred-plus microservices and keep them functional. As the client already had a fully automated CI/CD pipeline in place, we added the Kubernetes cluster as a deployment target in that pipeline.
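A single Ingress resource can fan incoming traffic out to many backend services by URL path. The sketch below shows the general shape of such a resource for a large set of microservices; the host name, service names, and path scheme are hypothetical stand-ins, not the client's real routing rules.

```python
# Sketch: one Ingress resource routing /<service> paths on a shared
# host to many backend Services. All names here are hypothetical.

def build_ingress(name, host, services, port=80):
    """Map each service name to the path /<service> on one host."""
    rules = [{
        "host": host,
        "http": {
            "paths": [{
                "path": f"/{svc}",
                "pathType": "Prefix",
                "backend": {"service": {"name": svc, "port": {"number": port}}},
            } for svc in services],
        },
    }]
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {"rules": rules},
    }

# Stand-in for a fleet of 100+ microservices.
services = [f"service-{i:03d}" for i in range(1, 101)]
ingress = build_ingress("platform-ingress", "apps.example.com", services)
```

In practice the path layout, TLS settings, and the choice of Ingress controller would be tailored to each application, but the fan-out pattern stays the same.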
As part of the engagement, we converted the entire Kubernetes cluster deployment into Infrastructure as Code (IaC) using Ansible Tower. This proved especially useful to the client, whose work involves handling sensitive customer information. IaC also improved the company's efficiency, as disaster recovery time was significantly reduced. We demonstrated the use of Prometheus to collect real-time metrics from the Kubernetes cluster and visualized those metrics as charts and graphs in Grafana.
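Prometheus discovers Kubernetes scrape targets through its kubernetes_sd_configs mechanism, where each scrape job selects a discovery role such as node or pod. The snippet below builds a dict matching the shape of prometheus.yml's scrape_configs section as a minimal sketch; the job names are illustrative, and the relabeling rules a production setup would add are omitted.

```python
# Sketch: the shape of a minimal Prometheus scrape configuration for a
# Kubernetes cluster, built as a plain dict. Job names are illustrative;
# real setups add relabel_configs to filter and rename targets.

def prometheus_scrape_config(jobs):
    """Build a dict shaped like prometheus.yml's scrape_configs section.

    `jobs` is a list of (job_name, kubernetes_sd role) pairs, where the
    role is one of Prometheus's discovery roles (node, pod, service, ...).
    """
    return {
        "scrape_configs": [
            {
                "job_name": job,
                "kubernetes_sd_configs": [{"role": role}],
            }
            for job, role in jobs
        ]
    }

config = prometheus_scrape_config([
    ("kubernetes-nodes", "node"),
    ("kubernetes-pods", "pod"),
])
```

Rendered to YAML, a configuration of this shape has Prometheus discover every node and pod in the cluster and scrape their metrics endpoints, which Grafana can then query for dashboards.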
Tools recommended or used by Loves Cloud for the solutions to work smoothly:
- Azure Stack as the cloud platform.
- Kubernetes to manage the containerized microservices.
- Ansible Tower to convert the infrastructure into code.
- Prometheus and Grafana to monitor real-time statistics.
- Python and Shell scripting for automation.
The results were overwhelming!
Loves Cloud was successful in helping the client achieve the desired results within the specified time frame.
- The client was able to deploy more than 100 containerized applications, with over 2,000 deployments, on the newly designed Kubernetes cluster.
- We took care of replication and ensured that, through Infrastructure as Code, the Kubernetes cluster can be recreated within two hours in case of a disaster.
- We showed our client how the entire deployment process could be automated with the use of scripting languages.
Every step we followed to deploy the cluster, along with the best practices we recommended, was documented. The client later used this documentation to train its internal teams.
At Loves Cloud, we are constantly leveraging the power of various public cloud computing platforms along with multiple open source software solutions to automate, optimize, and scale workloads of our customers. To learn more about our services aimed at digital transformation of your business, please visit https://www.loves.cloud/ or write to us at email@example.com.